Jan 04 00:10:25 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 04 00:10:26 crc kubenswrapper[5108]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 04 00:10:26 crc kubenswrapper[5108]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 04 00:10:26 crc kubenswrapper[5108]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 04 00:10:26 crc kubenswrapper[5108]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 04 00:10:26 crc kubenswrapper[5108]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 04 00:10:26 crc kubenswrapper[5108]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.209042 5108 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212533 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212553 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212560 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212596 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212603 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212608 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212614 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212619 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212624 5108 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212629 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212634 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212640 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212645 5108 
feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212651 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212656 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212661 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212665 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212670 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212675 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212680 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212684 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212689 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212694 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212699 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212703 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212708 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212713 5108 feature_gate.go:328] unrecognized feature gate: 
ClusterAPIInstallIBMCloud Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212718 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212723 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212728 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212733 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212738 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212743 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212748 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212752 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212757 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212762 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212767 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212771 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212776 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212795 5108 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212801 
5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212806 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212812 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212816 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212821 5108 feature_gate.go:328] unrecognized feature gate: Example2 Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212827 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212831 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212836 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212841 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212845 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212850 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212854 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212859 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212864 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212870 5108 feature_gate.go:328] unrecognized feature gate: 
ClusterMonitoringConfig Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212875 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212880 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212886 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212892 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212899 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212905 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212912 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212918 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212923 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212928 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212932 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212937 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212945 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212951 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212957 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212962 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212968 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212973 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212979 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212985 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212990 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.212996 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213001 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213006 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213011 5108 feature_gate.go:328] unrecognized feature gate: Example Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213016 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213021 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 04 00:10:26 crc 
kubenswrapper[5108]: W0104 00:10:26.213026 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213031 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213035 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213754 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213763 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213768 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213774 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213780 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213785 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213789 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213794 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213800 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213805 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213810 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213815 5108 
feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213819 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213824 5108 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213834 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213839 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213843 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213848 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213852 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213857 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213862 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213870 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213875 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213879 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213884 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213889 5108 feature_gate.go:328] unrecognized feature 
gate: AzureMultiDisk Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213894 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213899 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213903 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213908 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213915 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213921 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213926 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213932 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213937 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213942 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213947 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213951 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213956 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213961 5108 feature_gate.go:328] unrecognized feature gate: 
EtcdBackendQuota Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213966 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213971 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213976 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213981 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213985 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213990 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.213996 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214001 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214005 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214010 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214017 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214022 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214027 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214035 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 
00:10:26.214039 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214044 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214049 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214054 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214058 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214064 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214068 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214073 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214079 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214085 5108 feature_gate.go:328] unrecognized feature gate: Example Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214090 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214095 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214100 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214104 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214111 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214116 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214120 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214125 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214130 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214134 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214140 5108 feature_gate.go:328] unrecognized feature gate: Example2 Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214145 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214149 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 
00:10:26.214155 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214160 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214165 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214169 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214174 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214179 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214183 5108 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214188 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.214219 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214447 5108 flags.go:64] FLAG: --address="0.0.0.0" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214469 5108 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214480 5108 flags.go:64] FLAG: --anonymous-auth="true" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214488 5108 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214496 5108 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214502 5108 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214510 5108 flags.go:64] FLAG: 
--authorization-mode="AlwaysAllow" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214519 5108 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214525 5108 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214530 5108 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214536 5108 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214542 5108 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214548 5108 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214553 5108 flags.go:64] FLAG: --cgroup-root="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214558 5108 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214563 5108 flags.go:64] FLAG: --client-ca-file="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214569 5108 flags.go:64] FLAG: --cloud-config="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214574 5108 flags.go:64] FLAG: --cloud-provider="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214579 5108 flags.go:64] FLAG: --cluster-dns="[]" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214587 5108 flags.go:64] FLAG: --cluster-domain="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214593 5108 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214600 5108 flags.go:64] FLAG: --config-dir="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214605 5108 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 
00:10:26.214612 5108 flags.go:64] FLAG: --container-log-max-files="5" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214619 5108 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214625 5108 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214630 5108 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214636 5108 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214641 5108 flags.go:64] FLAG: --contention-profiling="false" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214647 5108 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214652 5108 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214661 5108 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214667 5108 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214675 5108 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214680 5108 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214686 5108 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214691 5108 flags.go:64] FLAG: --enable-load-reader="false" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214697 5108 flags.go:64] FLAG: --enable-server="true" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214702 5108 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214710 5108 flags.go:64] FLAG: --event-burst="100" Jan 
04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214715 5108 flags.go:64] FLAG: --event-qps="50" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214721 5108 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214726 5108 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214732 5108 flags.go:64] FLAG: --eviction-hard="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214739 5108 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214744 5108 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214749 5108 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214754 5108 flags.go:64] FLAG: --eviction-soft="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214760 5108 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214766 5108 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214771 5108 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214776 5108 flags.go:64] FLAG: --experimental-mounter-path="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214782 5108 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214787 5108 flags.go:64] FLAG: --fail-swap-on="true" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214793 5108 flags.go:64] FLAG: --feature-gates="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214801 5108 flags.go:64] FLAG: --file-check-frequency="20s" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214807 5108 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" 
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214812 5108 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214818 5108 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214823 5108 flags.go:64] FLAG: --healthz-port="10248"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214829 5108 flags.go:64] FLAG: --help="false"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214834 5108 flags.go:64] FLAG: --hostname-override=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214839 5108 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214848 5108 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214854 5108 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214859 5108 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214865 5108 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214870 5108 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214875 5108 flags.go:64] FLAG: --image-service-endpoint=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214881 5108 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214886 5108 flags.go:64] FLAG: --kube-api-burst="100"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214892 5108 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214897 5108 flags.go:64] FLAG: --kube-api-qps="50"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214903 5108 flags.go:64] FLAG: --kube-reserved=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214908 5108 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214915 5108 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214920 5108 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214926 5108 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214931 5108 flags.go:64] FLAG: --lock-file=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214937 5108 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214942 5108 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214948 5108 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214957 5108 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214962 5108 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214967 5108 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214973 5108 flags.go:64] FLAG: --logging-format="text"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214978 5108 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214987 5108 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214993 5108 flags.go:64] FLAG: --manifest-url=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.214998 5108 flags.go:64] FLAG: --manifest-url-header=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215007 5108 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215012 5108 flags.go:64] FLAG: --max-open-files="1000000"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215020 5108 flags.go:64] FLAG: --max-pods="110"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215026 5108 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215031 5108 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215040 5108 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215045 5108 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215051 5108 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215056 5108 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215062 5108 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215076 5108 flags.go:64] FLAG: --node-status-max-images="50"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215082 5108 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215088 5108 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215093 5108 flags.go:64] FLAG: --pod-cidr=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215099 5108 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215110 5108 flags.go:64] FLAG: --pod-manifest-path=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215115 5108 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215121 5108 flags.go:64] FLAG: --pods-per-core="0"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215126 5108 flags.go:64] FLAG: --port="10250"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215132 5108 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215139 5108 flags.go:64] FLAG: --provider-id=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215144 5108 flags.go:64] FLAG: --qos-reserved=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215151 5108 flags.go:64] FLAG: --read-only-port="10255"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215156 5108 flags.go:64] FLAG: --register-node="true"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215162 5108 flags.go:64] FLAG: --register-schedulable="true"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215168 5108 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215178 5108 flags.go:64] FLAG: --registry-burst="10"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215183 5108 flags.go:64] FLAG: --registry-qps="5"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215188 5108 flags.go:64] FLAG: --reserved-cpus=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215194 5108 flags.go:64] FLAG: --reserved-memory=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215251 5108 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215258 5108 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215263 5108 flags.go:64] FLAG: --rotate-certificates="false"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215268 5108 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215274 5108 flags.go:64] FLAG: --runonce="false"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215279 5108 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215285 5108 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215293 5108 flags.go:64] FLAG: --seccomp-default="false"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215299 5108 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215304 5108 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215325 5108 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215356 5108 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215362 5108 flags.go:64] FLAG: --storage-driver-password="root"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215367 5108 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215373 5108 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215378 5108 flags.go:64] FLAG: --storage-driver-user="root"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215384 5108 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215390 5108 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215395 5108 flags.go:64] FLAG: --system-cgroups=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215401 5108 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215410 5108 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215415 5108 flags.go:64] FLAG: --tls-cert-file=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215420 5108 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215429 5108 flags.go:64] FLAG: --tls-min-version=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215434 5108 flags.go:64] FLAG: --tls-private-key-file=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215439 5108 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215445 5108 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215450 5108 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215456 5108 flags.go:64] FLAG: --v="2"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215464 5108 flags.go:64] FLAG: --version="false"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215471 5108 flags.go:64] FLAG: --vmodule=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215479 5108 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.215488 5108 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215614 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215621 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215627 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215632 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215637 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215643 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215650 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215656 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215661 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215666 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215671 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215676 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215681 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215686 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215691 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215696 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215700 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215707 5108 feature_gate.go:328] unrecognized feature gate: Example
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215712 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215717 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215722 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215727 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215732 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215737 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215742 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215747 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215753 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215757 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215762 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215767 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215772 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215779 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215784 5108 feature_gate.go:328] unrecognized feature gate: Example2
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215790 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215795 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215800 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215805 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215811 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215818 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215823 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215828 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215834 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215839 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215844 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215849 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215854 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215859 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215864 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215868 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215873 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215878 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215883 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215888 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215893 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215898 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215903 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215908 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215913 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215918 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215923 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215928 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215933 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215938 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215946 5108 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215951 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215956 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215961 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215967 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215971 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215976 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215986 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215992 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.215999 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.216006 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.216012 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.216017 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.216022 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.216028 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.216033 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.216037 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.216042 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.216047 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.216052 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.216057 5108 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.216062 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.216067 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.216260 5108 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.235023 5108 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.235066 5108 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235149 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235161 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235167 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235173 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235179 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235185 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235191 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235196 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235217 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235222 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235227 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235232 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235237 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235242 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235247 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235252 5108 feature_gate.go:328] unrecognized feature gate: Example2
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235257 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235263 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235267 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235273 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235278 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235283 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235290 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235298 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235303 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235309 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235314 5108 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235320 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235333 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235338 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235344 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235349 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235354 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235359 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235363 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235368 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235373 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235378 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235383 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235387 5108 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235392 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235397 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235402 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235406 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235411 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235416 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235421 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235426 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235433 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235438 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235443 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235448 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235453 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235458 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235463 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235467 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235472 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235477 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235482 5108 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235486 5108 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235492 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235498 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235503 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235508 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235513 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235518 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235523 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235528 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235533 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235537 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235543 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235548 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235553 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235558 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235563 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235568 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235574 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235579 5108 feature_gate.go:328] unrecognized feature gate: Example
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235584 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235589 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235594 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235599 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235605 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235609 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235614 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235619 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.235628 5108 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235786 5108 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235795 5108 feature_gate.go:328] unrecognized feature gate: Example2
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235801 5108 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235806 5108 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 04 00:10:26 crc
kubenswrapper[5108]: W0104 00:10:26.235811 5108 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235816 5108 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235821 5108 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235826 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235831 5108 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235836 5108 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235843 5108 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235848 5108 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235855 5108 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235862 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235867 5108 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235873 5108 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235878 5108 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235882 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235887 5108 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235892 5108 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235897 5108 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235903 5108 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235909 5108 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235914 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235919 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235923 5108 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235928 5108 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235933 5108 
feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235938 5108 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235943 5108 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235948 5108 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235953 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235958 5108 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235962 5108 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235967 5108 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235972 5108 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235977 5108 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235982 5108 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235987 5108 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235993 5108 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.235998 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236003 5108 feature_gate.go:328] 
unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236008 5108 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236012 5108 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236018 5108 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236023 5108 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236028 5108 feature_gate.go:328] unrecognized feature gate: Example Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236033 5108 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236038 5108 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236043 5108 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236048 5108 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236053 5108 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236058 5108 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236063 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236068 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236073 5108 feature_gate.go:328] unrecognized feature gate: 
MixedCPUsAllocation Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236078 5108 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236083 5108 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236088 5108 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236093 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236098 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236103 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236108 5108 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236113 5108 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236120 5108 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236125 5108 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236131 5108 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236136 5108 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236141 5108 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236146 5108 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236151 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236156 5108 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236161 5108 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236166 5108 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236171 5108 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236176 5108 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236181 5108 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236186 5108 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236191 5108 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 
00:10:26.236196 5108 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236216 5108 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236221 5108 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236226 5108 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236231 5108 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236235 5108 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.236241 5108 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.236249 5108 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.236678 5108 server.go:962] "Client rotation is on, will bootstrap in background" Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.239333 5108 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.242704 5108 bootstrap.go:101] "Use 
the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.242823 5108 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.243375 5108 server.go:1019] "Starting client certificate rotation" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.243510 5108 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.243591 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.252164 5108 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.260188 5108 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.261454 5108 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.289267 5108 log.go:25] "Validated CRI v1 runtime API" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.310101 5108 log.go:25] "Validated CRI v1 image API" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.312344 5108 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 04 00:10:26 crc kubenswrapper[5108]: 
I0104 00:10:26.314929 5108 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-04-00-04-25-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.314986 5108 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.333663 5108 manager.go:217] Machine: {Timestamp:2026-01-04 00:10:26.332290379 +0000 UTC m=+0.320855485 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:b32cf431-599e-4ef4-b60f-ec5735cef856 BootID:d5d783a5-a674-4781-98e0-72a073e00d58 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 
Type:vfs Inodes:819200 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ee:45:29 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ee:45:29 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:15:a6:d0 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e8:3d:9c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:91:f6:ba Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:fb:f2:0e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ee:16:16:37:78:37 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:7a:cd:c1:69:dc:31 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 
Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.334015 5108 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.334226 5108 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.335340 5108 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.335382 5108 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"
Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.335592 5108 topology_manager.go:138] "Creating topology manager with none policy" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.335603 5108 container_manager_linux.go:306] "Creating device plugin manager" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.335626 5108 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.335775 5108 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.336145 5108 state_mem.go:36] "Initialized new in-memory state store" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.336619 5108 server.go:1267] "Using root directory" path="/var/lib/kubelet" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.337129 5108 kubelet.go:491] "Attempting to sync node with API server" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.337155 5108 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.337170 5108 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.337183 5108 kubelet.go:397] "Adding apiserver pod source" Jan 04 00:10:26 crc 
kubenswrapper[5108]: I0104 00:10:26.337219 5108 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.338972 5108 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.338997 5108 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.339728 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.339796 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.340343 5108 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.340362 5108 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.341771 5108 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.342054 5108 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.342503 5108 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.342936 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.342964 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.342974 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.342984 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.342993 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.343002 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.343011 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.343020 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.343032 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.343054 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.343067 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.343179 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.343652 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.343673 5108 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.344628 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.356005 5108 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.356095 5108 server.go:1295] "Started kubelet"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.356344 5108 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.356466 5108 server_v1.go:47] "podresources" method="list" useActivePods=true
Jan 04 00:10:26 crc systemd[1]: Started Kubernetes Kubelet.
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.376878 5108 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.356702 5108 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.378410 5108 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.378413 5108 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.378509 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18875ea177a1e48a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.35605313 +0000 UTC m=+0.344618216,LastTimestamp:2026-01-04 00:10:26.35605313 +0000 UTC m=+0.344618216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.379330 5108 server.go:317] "Adding debug handlers to kubelet server"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.379608 5108 volume_manager.go:295] "The desired_state_of_world populator starts"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.379656 5108 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.379786 5108 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.379926 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="200ms"
Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.380104 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.381689 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.382259 5108 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.382296 5108 factory.go:55] Registering systemd factory
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.382309 5108 factory.go:223] Registration of the systemd container factory successfully
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.386800 5108 factory.go:153] Registering CRI-O factory
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.386824 5108 factory.go:223] Registration of the crio container factory successfully
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.386850 5108 factory.go:103] Registering Raw factory
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.386870 5108 manager.go:1196] Started watching for new ooms in manager
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.387951 5108 manager.go:319] Starting recovery of all containers
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.412533 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.412836 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.412859 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.412871 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.412890 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.412903 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.412935 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.412951 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.412966 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.412986 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413003 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413029 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413044 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413064 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413082 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413098 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413112 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413127 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413146 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413159 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413174 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413185 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413216 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413227 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413240 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413258 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413270 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413289 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413311 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413322 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413334 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413354 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413365 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413380 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413393 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413409 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413420 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413432 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413449 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413460 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413475 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413487 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413501 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413513 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413525 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413624 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413639 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413691 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413747 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413759 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413818 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413861 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413877 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413890 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413903 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413915 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413968 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413981 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.413990 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414040 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414051 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414107 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414123 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414135 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414193 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414242 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414273 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414300 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414310 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414323 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414334 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414346 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414357 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414370 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414380 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414406 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414446 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414468 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414480 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414556 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414565 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414578 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414606 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414667 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414677 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414688 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414698 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414708 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414719 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414729 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414741 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414778 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414791 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414801 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414836 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414848 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414857 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414901 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414925 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext=""
Jan 04 00:10:26 crc kubenswrapper[5108]: 
I0104 00:10:26.414965 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414975 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414985 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.414996 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.415005 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.415076 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.415085 5108 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.415095 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.415146 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.415155 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.415187 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.418813 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.418834 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.418945 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.418959 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.418983 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419003 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419016 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419028 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419070 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419082 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419094 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419139 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419151 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419176 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" 
volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419188 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419213 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419224 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419312 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419325 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419334 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" 
volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419362 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419412 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419426 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419436 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419445 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419485 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" 
seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419508 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419521 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419530 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419556 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419567 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419591 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 
00:10:26.419616 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419627 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419640 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419651 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419676 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419698 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419708 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419720 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419729 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419740 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419749 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419761 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419769 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" 
volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419803 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419815 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419825 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.419837 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.420682 5108 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.420735 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.420849 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.420894 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.420940 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421016 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421035 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421046 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421080 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421091 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421104 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421115 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421177 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421189 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421215 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421227 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421272 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421286 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421298 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421329 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" 
seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421366 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421376 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421388 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421397 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421425 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421434 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421446 5108 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421455 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421551 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421567 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421576 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421615 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421627 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421838 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421855 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421879 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421910 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421921 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421936 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" 
volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421948 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421978 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.421990 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422000 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422011 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422041 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" 
volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422052 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422062 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422072 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422084 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422094 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422105 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 04 00:10:26 
crc kubenswrapper[5108]: I0104 00:10:26.422117 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422131 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422149 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422160 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422173 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422183 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422207 5108 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422219 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422229 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422243 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422253 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422265 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422276 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422289 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422300 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422374 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422384 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422396 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422405 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" 
seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422416 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422426 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422436 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422447 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422457 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422469 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422480 
5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422491 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422500 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422510 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422521 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422530 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422541 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422552 5108 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422563 5108 reconstruct.go:97] "Volume reconstruction finished" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.422570 5108 reconciler.go:26] "Reconciler: start to sync state" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.427103 5108 manager.go:324] Recovery completed Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.428802 5108 watcher.go:152] Failed to watch directory "/sys/fs/cgroup/system.slice/ocp-mco-sshkey.service": inotify_add_watch /sys/fs/cgroup/system.slice/ocp-mco-sshkey.service: no such file or directory Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.441942 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.443482 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.443527 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.443542 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.444366 5108 cpu_manager.go:222] "Starting CPU manager" policy="none" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.444460 5108 cpu_manager.go:223] "Reconciling" 
reconcilePeriod="10s" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.444528 5108 state_mem.go:36] "Initialized new in-memory state store" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.445040 5108 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.447484 5108 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.447529 5108 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.447571 5108 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.447585 5108 kubelet.go:2451] "Starting kubelet main sync loop" Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.447639 5108 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.448257 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.451380 5108 policy_none.go:49] "None policy: Start" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.451408 5108 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.451422 5108 state_mem.go:35] "Initializing new in-memory state store" Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.482694 5108 kubelet_node_status.go:515] "Error getting 
the current node from lister" err="node \"crc\" not found" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.510668 5108 manager.go:341] "Starting Device Plugin manager" Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.510950 5108 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.510965 5108 server.go:85] "Starting device plugin registration server" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.511438 5108 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.511452 5108 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.511797 5108 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.511860 5108 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.511866 5108 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.517176 5108 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.517352 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.548701 5108 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.549117 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.550551 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.550620 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.550636 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.551374 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.551673 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.551767 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.552540 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.552598 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.552616 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.552761 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.552795 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.552808 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.553560 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.553697 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.553752 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.554145 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.554173 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.554186 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.554372 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.554412 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.554429 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.554979 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.555238 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.555322 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.555527 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.555558 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.555572 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.556029 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.556073 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.556088 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.556645 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.556768 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.556815 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.557147 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.557214 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.557233 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.557323 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.557368 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.557386 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.559055 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.559121 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.560076 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.560123 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.560141 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.581733 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="400ms"
Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.595312 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.603373 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.611898 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.612904 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.612961 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.612973 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.613002 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.613619 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.621658 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.626249 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.626534 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.626576 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.626596 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.626618 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.626643 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.626663 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.626683 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.626751 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.626791 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.626863 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.626898 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627005 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627006 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627060 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627109 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627141 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627303 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627400 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627443 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627469 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627339 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627472 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627566 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627593 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627618 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627664 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627692 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627746 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.627906 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.640114 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.646806 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729219 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729266 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729289 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729313 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729334 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729374 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729388 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729395 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729409 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729519 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729647 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729701 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729744 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729802 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729804 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729822 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729842 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729890 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729893 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729904 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729937 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729947 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.729967 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.730011 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.730041 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.730046 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.730060 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.730114 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.730125 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.730158 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.730165 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.730184 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.814628 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.816115 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.816182 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.816194 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.816259 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.816930 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.896673 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.904510 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.922544 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.940676 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.941877 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-b2bdd29e4f49ed6c031ac42a14db609911eec4c95cfd8b36b5304a6d62eb47ff WatchSource:0}: Error finding container b2bdd29e4f49ed6c031ac42a14db609911eec4c95cfd8b36b5304a6d62eb47ff: Status 404 returned error can't find the container with id b2bdd29e4f49ed6c031ac42a14db609911eec4c95cfd8b36b5304a6d62eb47ff
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.946085 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-40005cb23b95e7441c0ca016e5e61c6a8a27cdb1ad5e38d3eeb032ceb1c3bf0f WatchSource:0}: Error finding container 40005cb23b95e7441c0ca016e5e61c6a8a27cdb1ad5e38d3eeb032ceb1c3bf0f: Status 404 returned error can't find the container with id 40005cb23b95e7441c0ca016e5e61c6a8a27cdb1ad5e38d3eeb032ceb1c3bf0f
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.947282 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:26 crc kubenswrapper[5108]: I0104 00:10:26.951186 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.965152 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-c2d3c8e29848e4a9ca372238002d29c2224f0dc59cf989bb05c639da14cd7ccd WatchSource:0}: Error finding container c2d3c8e29848e4a9ca372238002d29c2224f0dc59cf989bb05c639da14cd7ccd: Status 404 returned error can't find the container with id c2d3c8e29848e4a9ca372238002d29c2224f0dc59cf989bb05c639da14cd7ccd
Jan 04 00:10:26 crc kubenswrapper[5108]: W0104 00:10:26.967411 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-fb8ec504d0af9b5a245316386b0c3251fae5c562c21205815d66c4bac9c9c54c WatchSource:0}: Error finding container fb8ec504d0af9b5a245316386b0c3251fae5c562c21205815d66c4bac9c9c54c: Status 404 returned error can't find the container with id fb8ec504d0af9b5a245316386b0c3251fae5c562c21205815d66c4bac9c9c54c
Jan 04 00:10:26 crc kubenswrapper[5108]: E0104 00:10:26.983459 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="800ms"
Jan 04 00:10:27 crc kubenswrapper[5108]: I0104 00:10:27.217713 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:27 crc kubenswrapper[5108]: I0104 00:10:27.219915 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:27 crc kubenswrapper[5108]: I0104 00:10:27.219963 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:27 crc kubenswrapper[5108]: I0104 00:10:27.219974 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:27 crc kubenswrapper[5108]: I0104 00:10:27.220000 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 04 00:10:27 crc kubenswrapper[5108]: E0104 00:10:27.220696 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc"
Jan 04 00:10:27 crc kubenswrapper[5108]: I0104 00:10:27.349114 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Jan 04 00:10:27 crc kubenswrapper[5108]: E0104 00:10:27.397152 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 04 00:10:27 crc kubenswrapper[5108]: I0104 00:10:27.454166 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"40005cb23b95e7441c0ca016e5e61c6a8a27cdb1ad5e38d3eeb032ceb1c3bf0f"}
Jan 04 00:10:27 crc kubenswrapper[5108]: I0104 00:10:27.455288 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"c2d3c8e29848e4a9ca372238002d29c2224f0dc59cf989bb05c639da14cd7ccd"}
Jan 04 00:10:27 crc kubenswrapper[5108]: I0104 00:10:27.456259 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"fb8ec504d0af9b5a245316386b0c3251fae5c562c21205815d66c4bac9c9c54c"}
Jan 04 00:10:27 crc kubenswrapper[5108]: I0104 00:10:27.457172 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"b2bdd29e4f49ed6c031ac42a14db609911eec4c95cfd8b36b5304a6d62eb47ff"}
Jan 04 00:10:27 crc kubenswrapper[5108]: I0104 00:10:27.458165 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"68903ae0e5c2f2437b695b129286ec479480a4a19f035a1d92275f419781a54d"}
Jan 04 00:10:27 crc kubenswrapper[5108]: E0104 00:10:27.785683 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="1.6s"
Jan 04 00:10:27 crc kubenswrapper[5108]: E0104 00:10:27.862978 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 04 00:10:27 crc kubenswrapper[5108]: E0104 00:10:27.943619 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 04 00:10:28 crc kubenswrapper[5108]: E0104 00:10:28.005740 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.021266 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.022989 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.023068 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.023081 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.023126 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 04 00:10:28 crc kubenswrapper[5108]: E0104 00:10:28.023897 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc"
Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.349339 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.353272 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 04 00:10:28 crc kubenswrapper[5108]: E0104 00:10:28.354658 5108 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.462732 5108 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="d98857ef4501aaef6030f0f846b91a14f15880222b497d8721b729a811f9cc0b" exitCode=0
Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.462828 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"d98857ef4501aaef6030f0f846b91a14f15880222b497d8721b729a811f9cc0b"}
Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.463040 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.463915 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.463960 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.463977 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:28
crc kubenswrapper[5108]: E0104 00:10:28.464275 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.472581 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="76bbcaf7c19eae97cabab72b1af9ee18fd88354943af8dab060b9ab39179242a" exitCode=0 Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.472736 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"76bbcaf7c19eae97cabab72b1af9ee18fd88354943af8dab060b9ab39179242a"} Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.473051 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.475048 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.475104 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.475123 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:28 crc kubenswrapper[5108]: E0104 00:10:28.476008 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.478167 5108 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="2efcdf49d4b3f8088542db77988c0d89b0543858ed507ac440a68dbdf5705732" exitCode=0 Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.478443 5108 kubelet_node_status.go:413] "Setting 
node annotation to enable volume controller attach/detach" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.478511 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"2efcdf49d4b3f8088542db77988c0d89b0543858ed507ac440a68dbdf5705732"} Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.479004 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.479047 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.479068 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:28 crc kubenswrapper[5108]: E0104 00:10:28.479320 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.479717 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.480747 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.480815 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.480834 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.480961 5108 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" 
containerID="8a4534c19318d79d45b4218830f651d1cd0121733d43f00126b77092e620dbf6" exitCode=0 Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.481065 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"8a4534c19318d79d45b4218830f651d1cd0121733d43f00126b77092e620dbf6"} Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.481179 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.481645 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.481673 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.481689 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.483213 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"9b4eb4e10456fad30e3a03344ec2affe56bf2b509b098d5b2b3e0d405875b416"} Jan 04 00:10:28 crc kubenswrapper[5108]: I0104 00:10:28.483260 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"e4871dd57f0ecd21f2d7f2b64f2493a0612dd77b89b0feeff7852b3ea1421b33"} Jan 04 00:10:28 crc kubenswrapper[5108]: E0104 00:10:28.488391 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:29 crc 
kubenswrapper[5108]: E0104 00:10:29.247370 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.349167 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 04 00:10:29 crc kubenswrapper[5108]: E0104 00:10:29.387374 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="3.2s" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.497232 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"cf77409fe9a2a06b6cee539ab960b8ffe727a07751479e7c45e6314efc896193"} Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.497279 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b50433c05b4e9462bc1aeb26ab699177676176c7912e3f3701262c4c809e3cc2"} Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.497294 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7b7d5d310358a9b842de277978eebe04b3dd67697935a4e7331293c8f2ce2c12"} Jan 04 00:10:29 crc 
kubenswrapper[5108]: I0104 00:10:29.500014 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"527b9b2ed8353f00600d3385d2dd27e109b87532fe919428fc3fcd303846c1f2"} Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.500191 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.501395 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.501447 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.501459 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:29 crc kubenswrapper[5108]: E0104 00:10:29.501690 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.516789 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"2a126cd2de771b57582f22e51d037cc93cb4afd7c3d6afe7fce9b37e4386a8de"} Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.516836 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"b46597ace50f1479ce247dd96257545e8ebd89d91ea8d25b96566c802bc5770c"} Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.519947 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"28b887516a54da7ea3f035c2831e5d2ceef4487d4328fb87020325e4818d991f"} Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.519985 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"255667ec678133d539daab501a5b98a62289ce5d0229da32b3582e57ad5a5c40"} Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.520170 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.522597 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.522637 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.522661 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:29 crc kubenswrapper[5108]: E0104 00:10:29.522918 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.537036 5108 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="d59b5133e349e7e5d7b721998724542bfa25fd017309a83749abbe4f38790799" exitCode=0 Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.537148 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"d59b5133e349e7e5d7b721998724542bfa25fd017309a83749abbe4f38790799"} Jan 04 00:10:29 
crc kubenswrapper[5108]: I0104 00:10:29.537308 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.538005 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.538037 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.538047 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:29 crc kubenswrapper[5108]: E0104 00:10:29.538257 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:29 crc kubenswrapper[5108]: E0104 00:10:29.577845 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.630366 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.631576 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.631613 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:29 crc kubenswrapper[5108]: I0104 00:10:29.631624 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:29 crc kubenswrapper[5108]: 
I0104 00:10:29.631654 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 04 00:10:29 crc kubenswrapper[5108]: E0104 00:10:29.632220 5108 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Jan 04 00:10:29 crc kubenswrapper[5108]: E0104 00:10:29.683217 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.457677 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.543193 5108 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="33de743a58f4b3abac7e4ee060e48ec3b0d12948e982e7d543847f2234fad921" exitCode=0 Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.543327 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"33de743a58f4b3abac7e4ee060e48ec3b0d12948e982e7d543847f2234fad921"} Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.543451 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.544493 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.544574 5108 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.544598 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:30 crc kubenswrapper[5108]: E0104 00:10:30.545150 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.548955 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c47bf710a440275cb896551bbf5722ea9e9c8f9d57c0b0612e6041cf2a45fa2c"} Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.548991 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"2d7a38395218096d15fda6992626e039e078f2bec25e625392f1b72f1fc46dcb"} Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.549157 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.550667 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.550714 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.550733 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:30 crc kubenswrapper[5108]: E0104 00:10:30.551066 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 
00:10:30.553192 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"2ba930f65c545366818d27dc41669dd09c8c81630dec8b3a9870c1bd42387201"} Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.553285 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.553300 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.553389 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.554150 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.554222 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.554236 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.554250 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.554285 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.554303 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.554241 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:30 crc 
kubenswrapper[5108]: I0104 00:10:30.554288 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:30 crc kubenswrapper[5108]: I0104 00:10:30.554385 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:30 crc kubenswrapper[5108]: E0104 00:10:30.554584 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:30 crc kubenswrapper[5108]: E0104 00:10:30.554648 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:30 crc kubenswrapper[5108]: E0104 00:10:30.554931 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.317922 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.324278 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.371516 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.560052 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"5600c53dc483245092b5d86d14ce5cd512c39f5cde0f47f32ba2d68c92d05cc4"} Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.560133 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"5faa5d936dcf21f3645dc93fead84972db7b350c39f1ae1f4ba5ddb7af9d0f91"} Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.560153 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"4770a34a9314b95470ad00e2ab4b5d3dc56c2a21e54866222ebe78dcd2f04ba9"} Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.560088 5108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.560268 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.560366 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.560494 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.560645 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.561217 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.561276 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.561296 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.561319 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 
00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.561350 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.561364 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.561389 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.561434 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:31 crc kubenswrapper[5108]: I0104 00:10:31.561457 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:31 crc kubenswrapper[5108]: E0104 00:10:31.561715 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:31 crc kubenswrapper[5108]: E0104 00:10:31.562026 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:31 crc kubenswrapper[5108]: E0104 00:10:31.563160 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.453289 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.570845 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"2ea94b55e12c0f25dcd9c205306a29a282c096d4bbf535c91a6b5cc419be53f4"} Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 
00:10:32.570944 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9eb8b844800fe1d272ec5c719cd0db94d9da63d845e436f1afbafda9fcf5c3ae"} Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.571064 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.571066 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.571239 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.572063 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.572114 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.572135 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.572246 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.572280 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.572299 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.572503 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 
00:10:32.572583 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.572607 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:32 crc kubenswrapper[5108]: E0104 00:10:32.572708 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:32 crc kubenswrapper[5108]: E0104 00:10:32.572880 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:32 crc kubenswrapper[5108]: E0104 00:10:32.573389 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.781169 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.811341 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.832484 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.834091 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.834165 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.834194 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:32 crc kubenswrapper[5108]: I0104 00:10:32.834279 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.052315 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.052675 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.054044 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.054144 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.054166 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:33 crc kubenswrapper[5108]: E0104 00:10:33.054938 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.344363 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.442990 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.573607 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.573877 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.574080 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.574521 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.574571 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.574586 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.574539 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.574748 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.574793 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:33 crc kubenswrapper[5108]: E0104 00:10:33.575035 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.575172 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.575252 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:33 crc kubenswrapper[5108]: I0104 00:10:33.575275 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:33 crc kubenswrapper[5108]: E0104 00:10:33.575188 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:33 crc kubenswrapper[5108]: E0104 00:10:33.575746 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:34 crc kubenswrapper[5108]: I0104 00:10:34.501952 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:34 crc kubenswrapper[5108]: I0104 00:10:34.576800 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:34 crc kubenswrapper[5108]: I0104 00:10:34.577084 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:34 crc kubenswrapper[5108]: I0104 00:10:34.578564 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:34 crc kubenswrapper[5108]: I0104 00:10:34.578628 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:34 crc kubenswrapper[5108]: I0104 00:10:34.578630 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:34 crc kubenswrapper[5108]: I0104 00:10:34.578650 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:34 crc kubenswrapper[5108]: I0104 00:10:34.578709 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:34 crc kubenswrapper[5108]: I0104 00:10:34.578843 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:34 crc kubenswrapper[5108]: E0104 00:10:34.579508 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:34 crc kubenswrapper[5108]: E0104 00:10:34.579957 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:35 crc kubenswrapper[5108]: I0104 00:10:35.781585 5108 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body=
Jan 04 00:10:35 crc kubenswrapper[5108]: I0104 00:10:35.781717 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded"
Jan 04 00:10:36 crc kubenswrapper[5108]: E0104 00:10:36.517630 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 04 00:10:40 crc kubenswrapper[5108]: I0104 00:10:40.350117 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 04 00:10:40 crc kubenswrapper[5108]: I0104 00:10:40.729514 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 04 00:10:40 crc kubenswrapper[5108]: I0104 00:10:40.729953 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 04 00:10:40 crc kubenswrapper[5108]: I0104 00:10:40.882179 5108 trace.go:236] Trace[1314497609]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Jan-2026 00:10:30.879) (total time: 10002ms):
Jan 04 00:10:40 crc kubenswrapper[5108]: Trace[1314497609]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (00:10:40.881)
Jan 04 00:10:40 crc kubenswrapper[5108]: Trace[1314497609]: [10.002477047s] [10.002477047s] END
Jan 04 00:10:40 crc kubenswrapper[5108]: E0104 00:10:40.882775 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 04 00:10:41 crc kubenswrapper[5108]: I0104 00:10:41.443087 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 04 00:10:41 crc kubenswrapper[5108]: I0104 00:10:41.443322 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 04 00:10:41 crc kubenswrapper[5108]: I0104 00:10:41.462054 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 04 00:10:41 crc kubenswrapper[5108]: I0104 00:10:41.462320 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 04 00:10:42 crc kubenswrapper[5108]: E0104 00:10:42.590277 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 04 00:10:42 crc kubenswrapper[5108]: I0104 00:10:42.849309 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 04 00:10:42 crc kubenswrapper[5108]: I0104 00:10:42.849972 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:42 crc kubenswrapper[5108]: I0104 00:10:42.851645 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:42 crc kubenswrapper[5108]: I0104 00:10:42.851714 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:42 crc kubenswrapper[5108]: I0104 00:10:42.851727 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:42 crc kubenswrapper[5108]: E0104 00:10:42.852305 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:42 crc kubenswrapper[5108]: I0104 00:10:42.867217 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.450092 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.450474 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.451831 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.452617 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.452684 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:43 crc kubenswrapper[5108]: E0104 00:10:43.453403 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.456799 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.581297 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.581625 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.583718 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.583788 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.583802 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:43 crc kubenswrapper[5108]: E0104 00:10:43.584478 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.612636 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.612717 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.613994 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.614037 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.614050 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.614496 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.614551 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:10:43 crc kubenswrapper[5108]: I0104 00:10:43.614564 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:10:43 crc kubenswrapper[5108]: E0104 00:10:43.614574 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:43 crc kubenswrapper[5108]: E0104 00:10:43.615043 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:10:45 crc kubenswrapper[5108]: I0104 00:10:45.782058 5108 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 04 00:10:45 crc kubenswrapper[5108]: I0104 00:10:45.782319 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.335521 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 04 00:10:46 crc kubenswrapper[5108]: I0104 00:10:46.436707 5108 trace.go:236] Trace[1277329238]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Jan-2026 00:10:33.381) (total time: 13055ms):
Jan 04 00:10:46 crc kubenswrapper[5108]: Trace[1277329238]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 13055ms (00:10:46.436)
Jan 04 00:10:46 crc kubenswrapper[5108]: Trace[1277329238]: [13.055386597s] [13.055386597s] END
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.436769 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 04 00:10:46 crc kubenswrapper[5108]: I0104 00:10:46.437942 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.438023 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.443991 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea177a1e48a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.35605313 +0000 UTC m=+0.344618216,LastTimestamp:2026-01-04 00:10:26.35605313 +0000 UTC m=+0.344618216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: I0104 00:10:46.444125 5108 trace.go:236] Trace[898148843]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Jan-2026 00:10:34.870) (total time: 11573ms):
Jan 04 00:10:46 crc kubenswrapper[5108]: Trace[898148843]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 11573ms (00:10:46.444)
Jan 04 00:10:46 crc kubenswrapper[5108]: Trace[898148843]: [11.57374024s] [11.57374024s] END
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.444190 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 04 00:10:46 crc kubenswrapper[5108]: I0104 00:10:46.444705 5108 trace.go:236] Trace[879600285]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Jan-2026 00:10:32.968) (total time: 13475ms):
Jan 04 00:10:46 crc kubenswrapper[5108]: Trace[879600285]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 13475ms (00:10:46.444)
Jan 04 00:10:46 crc kubenswrapper[5108]: Trace[879600285]: [13.475094968s] [13.475094968s] END
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.444881 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.450814 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd860ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443509998 +0000 UTC m=+0.432075094,LastTimestamp:2026-01-04 00:10:26.443509998 +0000 UTC m=+0.432075094,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: I0104 00:10:46.455633 5108 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.456427 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd8c552 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443535698 +0000 UTC m=+0.432100794,LastTimestamp:2026-01-04 00:10:26.443535698 +0000 UTC m=+0.432100794,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.461774 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd8f361 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443547489 +0000 UTC m=+0.432112585,LastTimestamp:2026-01-04 00:10:26.443547489 +0000 UTC m=+0.432112585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.466972 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea1813c8d3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.517183805 +0000 UTC m=+0.505748891,LastTimestamp:2026-01-04 00:10:26.517183805 +0000 UTC m=+0.505748891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.471832 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd860ee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd860ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443509998 +0000 UTC m=+0.432075094,LastTimestamp:2026-01-04 00:10:26.550594716 +0000 UTC m=+0.539159802,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.478185 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd8c552\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd8c552 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443535698 +0000 UTC m=+0.432100794,LastTimestamp:2026-01-04 00:10:26.550629927 +0000 UTC m=+0.539195013,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.484057 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd8f361\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd8f361 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443547489 +0000 UTC m=+0.432112585,LastTimestamp:2026-01-04 00:10:26.550642827 +0000 UTC m=+0.539207913,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.489440 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd860ee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd860ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443509998 +0000 UTC m=+0.432075094,LastTimestamp:2026-01-04 00:10:26.552572949 +0000 UTC m=+0.541138035,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.498520 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd8c552\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd8c552 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443535698 +0000 UTC m=+0.432100794,LastTimestamp:2026-01-04 00:10:26.55260864 +0000 UTC m=+0.541173726,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.503927 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd8f361\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd8f361 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443547489 +0000 UTC m=+0.432112585,LastTimestamp:2026-01-04 00:10:26.55262322 +0000 UTC m=+0.541188306,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.510254 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd860ee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd860ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443509998 +0000 UTC m=+0.432075094,LastTimestamp:2026-01-04 00:10:26.552784284 +0000 UTC m=+0.541349380,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.515230 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd8c552\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd8c552 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443535698 +0000 UTC m=+0.432100794,LastTimestamp:2026-01-04 00:10:26.552803115 +0000 UTC m=+0.541368201,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.517869 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.522726 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd8f361\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd8f361 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443547489 +0000 UTC m=+0.432112585,LastTimestamp:2026-01-04 00:10:26.552814405 +0000 UTC m=+0.541379491,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.529995 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd860ee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd860ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443509998 +0000 UTC m=+0.432075094,LastTimestamp:2026-01-04 00:10:26.554164382 +0000 UTC m=+0.542729468,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.534421 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd8c552\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd8c552 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443535698 +0000 UTC m=+0.432100794,LastTimestamp:2026-01-04 00:10:26.554180042 +0000 UTC m=+0.542745128,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.539490 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd8f361\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd8f361 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443547489 +0000 UTC m=+0.432112585,LastTimestamp:2026-01-04 00:10:26.554192023 +0000 UTC m=+0.542757109,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.544395 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd860ee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd860ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443509998 +0000 UTC m=+0.432075094,LastTimestamp:2026-01-04 00:10:26.554396518 +0000 UTC m=+0.542961604,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.549308 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd8c552\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd8c552 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443535698 +0000 UTC m=+0.432100794,LastTimestamp:2026-01-04 00:10:26.554419898 +0000 UTC m=+0.542984985,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.555673 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd8f361\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd8f361 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443547489 +0000 UTC m=+0.432112585,LastTimestamp:2026-01-04 00:10:26.554435469 +0000 UTC m=+0.543000565,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.560814 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd860ee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd860ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443509998 +0000 UTC m=+0.432075094,LastTimestamp:2026-01-04 00:10:26.555548959 +0000 UTC m=+0.544114045,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.564814 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd8c552\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd8c552 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443535698 +0000 UTC m=+0.432100794,LastTimestamp:2026-01-04 00:10:26.55556711 +0000 UTC m=+0.544132206,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.569378 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd8f361\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd8f361 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443547489 +0000 UTC m=+0.432112585,LastTimestamp:2026-01-04 00:10:26.5555775 +0000 UTC m=+0.544142596,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.574011 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd860ee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd860ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443509998 +0000 UTC m=+0.432075094,LastTimestamp:2026-01-04 00:10:26.556054072 +0000 UTC m=+0.544619168,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.578089 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18875ea17cd8c552\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18875ea17cd8c552 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status 
is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.443535698 +0000 UTC m=+0.432100794,LastTimestamp:2026-01-04 00:10:26.556083073 +0000 UTC m=+0.544648159,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.582983 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18875ea19b208077 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.951553143 +0000 UTC m=+0.940118229,LastTimestamp:2026-01-04 00:10:26.951553143 +0000 UTC m=+0.940118229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.587902 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18875ea19b228669 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.951685737 +0000 UTC m=+0.940250833,LastTimestamp:2026-01-04 00:10:26.951685737 +0000 UTC m=+0.940250833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.592999 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18875ea19b5c8d78 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.955488632 +0000 UTC m=+0.944053718,LastTimestamp:2026-01-04 00:10:26.955488632 +0000 UTC m=+0.944053718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.597432 5108 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea19c17fec8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.967772872 +0000 UTC m=+0.956337958,LastTimestamp:2026-01-04 00:10:26.967772872 +0000 UTC m=+0.956337958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.601931 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea19c7bf1f9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:26.974323193 +0000 UTC m=+0.962888279,LastTimestamp:2026-01-04 00:10:26.974323193 +0000 UTC 
m=+0.962888279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.605705 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18875ea1c44777a9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:27.641972649 +0000 UTC m=+1.630537735,LastTimestamp:2026-01-04 00:10:27.641972649 +0000 UTC m=+1.630537735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.609550 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18875ea1c448ec36 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 
00:10:27.642068022 +0000 UTC m=+1.630633108,LastTimestamp:2026-01-04 00:10:27.642068022 +0000 UTC m=+1.630633108,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.614018 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18875ea1c4492bc4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:27.642084292 +0000 UTC m=+1.630649378,LastTimestamp:2026-01-04 00:10:27.642084292 +0000 UTC m=+1.630649378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.618001 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea1c44ada71 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:27.642194545 +0000 UTC m=+1.630759631,LastTimestamp:2026-01-04 00:10:27.642194545 +0000 UTC m=+1.630759631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.621946 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea1c44bd13f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:27.642257727 +0000 UTC m=+1.630822813,LastTimestamp:2026-01-04 00:10:27.642257727 +0000 UTC m=+1.630822813,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.626084 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea1c53461d1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:27.657499089 
+0000 UTC m=+1.646064175,LastTimestamp:2026-01-04 00:10:27.657499089 +0000 UTC m=+1.646064175,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.630878 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea1c562df65 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:27.660545893 +0000 UTC m=+1.649110979,LastTimestamp:2026-01-04 00:10:27.660545893 +0000 UTC m=+1.649110979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.634396 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18875ea1c565cb58 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container 
kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:27.660737368 +0000 UTC m=+1.649302454,LastTimestamp:2026-01-04 00:10:27.660737368 +0000 UTC m=+1.649302454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.638241 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18875ea1c5786ee2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:27.661958882 +0000 UTC m=+1.650523968,LastTimestamp:2026-01-04 00:10:27.661958882 +0000 UTC m=+1.650523968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.642295 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18875ea1c578d382 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:27.661984642 +0000 UTC m=+1.650549728,LastTimestamp:2026-01-04 00:10:27.661984642 +0000 UTC m=+1.650549728,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.647054 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18875ea1c57cb08f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:27.662237839 +0000 UTC m=+1.650802935,LastTimestamp:2026-01-04 00:10:27.662237839 +0000 UTC m=+1.650802935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.652375 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18875ea1e3c774ac openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.170454188 +0000 UTC m=+2.159019274,LastTimestamp:2026-01-04 00:10:28.170454188 +0000 UTC m=+2.159019274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.657271 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18875ea1e49b0398 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.184318872 +0000 UTC m=+2.172883958,LastTimestamp:2026-01-04 00:10:28.184318872 +0000 UTC m=+2.172883958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc 
kubenswrapper[5108]: E0104 00:10:46.662673 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18875ea1e4adb1a4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.185543076 +0000 UTC m=+2.174108162,LastTimestamp:2026-01-04 00:10:28.185543076 +0000 UTC m=+2.174108162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.667489 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea1f56124f9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.465739001 +0000 UTC m=+2.454304127,LastTimestamp:2026-01-04 00:10:28.465739001 +0000 UTC m=+2.454304127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.672866 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea1f62f2259 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.479238745 +0000 UTC m=+2.467803841,LastTimestamp:2026-01-04 00:10:28.479238745 +0000 UTC m=+2.467803841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.678431 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18875ea1f63f1dfc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.480286204 +0000 UTC m=+2.468851290,LastTimestamp:2026-01-04 00:10:28.480286204 +0000 UTC m=+2.468851290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.684125 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18875ea1f6fd3229 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.492743209 +0000 UTC m=+2.481308335,LastTimestamp:2026-01-04 00:10:28.492743209 +0000 UTC m=+2.481308335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.688490 5108 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea20d9c5a4b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.872272459 +0000 UTC m=+2.860837545,LastTimestamp:2026-01-04 00:10:28.872272459 +0000 UTC m=+2.860837545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.693069 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18875ea20da6322a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.872917546 +0000 UTC m=+2.861482632,LastTimestamp:2026-01-04 00:10:28.872917546 +0000 UTC m=+2.861482632,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.696967 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18875ea20e3110c6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.882018502 +0000 UTC m=+2.870583588,LastTimestamp:2026-01-04 00:10:28.882018502 +0000 UTC m=+2.870583588,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.701856 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea20eb68f0e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.890767118 +0000 UTC m=+2.879332204,LastTimestamp:2026-01-04 00:10:28.890767118 +0000 UTC 
m=+2.879332204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.706548 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea20ec6d418 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.891833368 +0000 UTC m=+2.880398454,LastTimestamp:2026-01-04 00:10:28.891833368 +0000 UTC m=+2.880398454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.711140 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea20ed1cf61 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: 
etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.892553057 +0000 UTC m=+2.881118143,LastTimestamp:2026-01-04 00:10:28.892553057 +0000 UTC m=+2.881118143,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.715482 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18875ea20edae3c9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.893148105 +0000 UTC m=+2.881713191,LastTimestamp:2026-01-04 00:10:28.893148105 +0000 UTC m=+2.881713191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.719170 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18875ea20eeb4146 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.894220614 +0000 UTC m=+2.882785700,LastTimestamp:2026-01-04 00:10:28.894220614 +0000 UTC m=+2.882785700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.723280 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18875ea20f319312 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.898829074 +0000 UTC m=+2.887394160,LastTimestamp:2026-01-04 00:10:28.898829074 +0000 UTC m=+2.887394160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.727086 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea2142d58a2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:28.98243805 +0000 UTC m=+2.971003146,LastTimestamp:2026-01-04 00:10:28.98243805 +0000 UTC m=+2.971003146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.731879 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18875ea21cff3ada openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.130410714 +0000 UTC m=+3.118975790,LastTimestamp:2026-01-04 00:10:29.130410714 +0000 UTC m=+3.118975790,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.736511 5108 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18875ea21de85b33 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.145688883 +0000 UTC m=+3.134253969,LastTimestamp:2026-01-04 00:10:29.145688883 +0000 UTC m=+3.134253969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.742672 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18875ea21dfd097d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 
00:10:29.147044221 +0000 UTC m=+3.135609297,LastTimestamp:2026-01-04 00:10:29.147044221 +0000 UTC m=+3.135609297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.748875 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18875ea21fc6998c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.177031052 +0000 UTC m=+3.165596128,LastTimestamp:2026-01-04 00:10:29.177031052 +0000 UTC m=+3.165596128,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.754845 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea22010ecf5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: 
kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.181902069 +0000 UTC m=+3.170467175,LastTimestamp:2026-01-04 00:10:29.181902069 +0000 UTC m=+3.170467175,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.756223 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18875ea22064db62 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.187402594 +0000 UTC m=+3.175967680,LastTimestamp:2026-01-04 00:10:29.187402594 +0000 UTC m=+3.175967680,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.759384 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18875ea22070e90e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.188192526 +0000 UTC m=+3.176757612,LastTimestamp:2026-01-04 00:10:29.188192526 +0000 UTC m=+3.176757612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.763506 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea220d24dfb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.194575355 +0000 UTC m=+3.183140441,LastTimestamp:2026-01-04 00:10:29.194575355 +0000 UTC m=+3.183140441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.769263 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea22235e810 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.21788008 +0000 UTC m=+3.206445166,LastTimestamp:2026-01-04 00:10:29.21788008 +0000 UTC m=+3.206445166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.774444 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18875ea23062b25a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.455696474 +0000 UTC m=+3.444261560,LastTimestamp:2026-01-04 00:10:29.455696474 +0000 UTC 
m=+3.444261560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.780836 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea230f45b86 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.465242502 +0000 UTC m=+3.453807588,LastTimestamp:2026-01-04 00:10:29.465242502 +0000 UTC m=+3.453807588,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.788042 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18875ea2315cc9a2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container 
kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.472086434 +0000 UTC m=+3.460651520,LastTimestamp:2026-01-04 00:10:29.472086434 +0000 UTC m=+3.460651520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.793702 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18875ea2315ff6a0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.47229456 +0000 UTC m=+3.460859646,LastTimestamp:2026-01-04 00:10:29.47229456 +0000 UTC m=+3.460859646,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.799676 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea232849c51 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.491473489 +0000 UTC m=+3.480038575,LastTimestamp:2026-01-04 00:10:29.491473489 +0000 UTC m=+3.480038575,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.804925 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea2329c101e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.493010462 +0000 UTC m=+3.481575548,LastTimestamp:2026-01-04 00:10:29.493010462 +0000 UTC m=+3.481575548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.810732 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18875ea233dc08ec openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.51398014 +0000 UTC m=+3.502545226,LastTimestamp:2026-01-04 00:10:29.51398014 +0000 UTC m=+3.502545226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.817592 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea235654721 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.539751713 +0000 UTC m=+3.528316789,LastTimestamp:2026-01-04 00:10:29.539751713 +0000 UTC m=+3.528316789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.823760 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea242c6e3c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.764252615 +0000 UTC m=+3.752817701,LastTimestamp:2026-01-04 00:10:29.764252615 +0000 UTC m=+3.752817701,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.831961 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea246852440 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.827052608 +0000 UTC m=+3.815617694,LastTimestamp:2026-01-04 00:10:29.827052608 +0000 UTC 
m=+3.815617694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.840026 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea246855a64 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.827066468 +0000 UTC m=+3.815631554,LastTimestamp:2026-01-04 00:10:29.827066468 +0000 UTC m=+3.815631554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.846362 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea2469f774e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.828777806 +0000 UTC m=+3.817342892,LastTimestamp:2026-01-04 00:10:29.828777806 +0000 UTC m=+3.817342892,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.853074 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea247d13ed8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.848817368 +0000 UTC m=+3.837382454,LastTimestamp:2026-01-04 00:10:29.848817368 +0000 UTC m=+3.837382454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.860964 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea2535dbcf4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:30.042574068 +0000 UTC m=+4.031139154,LastTimestamp:2026-01-04 00:10:30.042574068 +0000 UTC m=+4.031139154,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.872260 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea254a7ed1a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:30.064213274 +0000 UTC m=+4.052778360,LastTimestamp:2026-01-04 00:10:30.064213274 +0000 UTC m=+4.052778360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.876870 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea27177f12d 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:30.547607853 +0000 UTC m=+4.536172969,LastTimestamp:2026-01-04 00:10:30.547607853 +0000 UTC m=+4.536172969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.882146 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea2819376b8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:30.817846968 +0000 UTC m=+4.806412054,LastTimestamp:2026-01-04 00:10:30.817846968 +0000 UTC m=+4.806412054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.887185 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18875ea283967ec1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:30.851600065 +0000 UTC m=+4.840165151,LastTimestamp:2026-01-04 00:10:30.851600065 +0000 UTC m=+4.840165151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.892050 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea283b1a508 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:30.853379336 +0000 UTC m=+4.841944422,LastTimestamp:2026-01-04 00:10:30.853379336 +0000 UTC m=+4.841944422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.896650 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea2925df515 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:31.099553045 +0000 UTC m=+5.088118131,LastTimestamp:2026-01-04 00:10:31.099553045 +0000 UTC m=+5.088118131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.903837 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea29350e556 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:31.115474262 +0000 UTC m=+5.104039348,LastTimestamp:2026-01-04 00:10:31.115474262 +0000 UTC m=+5.104039348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.909171 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea2936de6b3 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:31.117375155 +0000 UTC m=+5.105940241,LastTimestamp:2026-01-04 00:10:31.117375155 +0000 UTC m=+5.105940241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.914900 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea2a0d67ae0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:31.34233264 +0000 UTC m=+5.330897736,LastTimestamp:2026-01-04 00:10:31.34233264 +0000 UTC m=+5.330897736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.919964 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18875ea2a181807e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:31.353540734 +0000 UTC m=+5.342105810,LastTimestamp:2026-01-04 00:10:31.353540734 +0000 UTC m=+5.342105810,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.924782 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea2a19477d3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:31.354783699 +0000 UTC m=+5.343348775,LastTimestamp:2026-01-04 00:10:31.354783699 +0000 UTC m=+5.343348775,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.931669 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" 
in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea2af51d8bb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:31.585298619 +0000 UTC m=+5.573863745,LastTimestamp:2026-01-04 00:10:31.585298619 +0000 UTC m=+5.573863745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.937250 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea2b08e3c7b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:31.606033531 +0000 UTC m=+5.594598637,LastTimestamp:2026-01-04 00:10:31.606033531 +0000 UTC m=+5.594598637,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.942778 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18875ea2b0b3009f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:31.608443039 +0000 UTC m=+5.597008165,LastTimestamp:2026-01-04 00:10:31.608443039 +0000 UTC m=+5.597008165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.947829 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea2be0ba422 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:31.832355874 +0000 UTC m=+5.820920950,LastTimestamp:2026-01-04 00:10:31.832355874 +0000 UTC m=+5.820920950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:46 crc kubenswrapper[5108]: E0104 00:10:46.953095 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18875ea2bf10c000 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:31.849467904 +0000 UTC m=+5.838032980,LastTimestamp:2026-01-04 00:10:31.849467904 +0000 UTC m=+5.838032980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:46.959831 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 04 00:10:47 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-controller-manager-crc.18875ea3a9716426 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Jan 04 00:10:47 crc kubenswrapper[5108]: body: Jan 04 00:10:47 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:35.781669926 +0000 UTC m=+9.770235062,LastTimestamp:2026-01-04 00:10:35.781669926 +0000 UTC m=+9.770235062,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 04 
00:10:47 crc kubenswrapper[5108]: > Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:46.970158 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18875ea3a97376df openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:35.781805791 +0000 UTC m=+9.770370927,LastTimestamp:2026-01-04 00:10:35.781805791 +0000 UTC m=+9.770370927,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:46.975811 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 04 00:10:47 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18875ea4d05f9d29 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection 
refused Jan 04 00:10:47 crc kubenswrapper[5108]: body: Jan 04 00:10:47 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:40.729783593 +0000 UTC m=+14.718348739,LastTimestamp:2026-01-04 00:10:40.729783593 +0000 UTC m=+14.718348739,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 04 00:10:47 crc kubenswrapper[5108]: > Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:46.979788 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea4d06448dc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:40.730089692 +0000 UTC m=+14.718654838,LastTimestamp:2026-01-04 00:10:40.730089692 +0000 UTC m=+14.718654838,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:46.983664 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 04 00:10:47 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18875ea4fae61f83 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 04 00:10:47 crc kubenswrapper[5108]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 04 00:10:47 crc kubenswrapper[5108]: Jan 04 00:10:47 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:41.443241859 +0000 UTC m=+15.431806935,LastTimestamp:2026-01-04 00:10:41.443241859 +0000 UTC m=+15.431806935,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 04 00:10:47 crc kubenswrapper[5108]: > Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:46.987752 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea4faedb869 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:41.443739753 +0000 UTC m=+15.432304829,LastTimestamp:2026-01-04 00:10:41.443739753 +0000 UTC 
m=+15.432304829,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:46.991863 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18875ea4fae61f83\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 04 00:10:47 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18875ea4fae61f83 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 04 00:10:47 crc kubenswrapper[5108]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 04 00:10:47 crc kubenswrapper[5108]: Jan 04 00:10:47 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:41.443241859 +0000 UTC m=+15.431806935,LastTimestamp:2026-01-04 00:10:41.462181621 +0000 UTC m=+15.450746737,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 04 00:10:47 crc kubenswrapper[5108]: > Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:46.995669 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18875ea4faedb869\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea4faedb869 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:41.443739753 +0000 UTC m=+15.432304829,LastTimestamp:2026-01-04 00:10:41.462383226 +0000 UTC m=+15.450948332,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:47.000311 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 04 00:10:47 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-controller-manager-crc.18875ea5fd866d92 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 04 00:10:47 crc kubenswrapper[5108]: body: Jan 04 00:10:47 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:45.78226933 +0000 UTC m=+19.770834456,LastTimestamp:2026-01-04 00:10:45.78226933 +0000 UTC 
m=+19.770834456,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 04 00:10:47 crc kubenswrapper[5108]: > Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:47.003835 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18875ea5fd87e599 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:45.782365593 +0000 UTC m=+19.770930699,LastTimestamp:2026-01-04 00:10:45.782365593 +0000 UTC m=+19.770930699,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:47 crc kubenswrapper[5108]: I0104 00:10:47.093009 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44686->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 04 00:10:47 crc kubenswrapper[5108]: I0104 00:10:47.093182 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44686->192.168.126.11:17697: read: connection reset by peer" Jan 04 00:10:47 crc kubenswrapper[5108]: I0104 00:10:47.093965 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 04 00:10:47 crc kubenswrapper[5108]: I0104 00:10:47.094080 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:47.098920 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 04 00:10:47 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18875ea64ba82d1f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44686->192.168.126.11:17697: read: connection reset by peer Jan 04 00:10:47 crc kubenswrapper[5108]: body: Jan 04 00:10:47 crc kubenswrapper[5108]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:47.093103903 +0000 UTC m=+21.081668989,LastTimestamp:2026-01-04 00:10:47.093103903 +0000 UTC m=+21.081668989,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 04 00:10:47 crc kubenswrapper[5108]: > Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:47.103701 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea64baa83b2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44686->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:47.093257138 +0000 UTC m=+21.081822234,LastTimestamp:2026-01-04 00:10:47.093257138 +0000 UTC m=+21.081822234,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:47.108456 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 04 00:10:47 crc kubenswrapper[5108]: &Event{ObjectMeta:{kube-apiserver-crc.18875ea64bb649e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 04 00:10:47 crc kubenswrapper[5108]: body: Jan 04 00:10:47 crc kubenswrapper[5108]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:47.09402877 +0000 UTC m=+21.082593856,LastTimestamp:2026-01-04 00:10:47.09402877 +0000 UTC m=+21.082593856,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 04 00:10:47 crc kubenswrapper[5108]: > Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:47.113176 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea64bb781b0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:47.094108592 +0000 UTC m=+21.082673668,LastTimestamp:2026-01-04 00:10:47.094108592 +0000 UTC m=+21.082673668,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:47 crc 
kubenswrapper[5108]: I0104 00:10:47.354905 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:10:47 crc kubenswrapper[5108]: I0104 00:10:47.626438 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 04 00:10:47 crc kubenswrapper[5108]: I0104 00:10:47.628027 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c47bf710a440275cb896551bbf5722ea9e9c8f9d57c0b0612e6041cf2a45fa2c" exitCode=255 Jan 04 00:10:47 crc kubenswrapper[5108]: I0104 00:10:47.628060 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"c47bf710a440275cb896551bbf5722ea9e9c8f9d57c0b0612e6041cf2a45fa2c"} Jan 04 00:10:47 crc kubenswrapper[5108]: I0104 00:10:47.628409 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:47 crc kubenswrapper[5108]: I0104 00:10:47.629158 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:47 crc kubenswrapper[5108]: I0104 00:10:47.629233 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:47 crc kubenswrapper[5108]: I0104 00:10:47.629249 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:47.629692 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:47 crc 
kubenswrapper[5108]: I0104 00:10:47.630129 5108 scope.go:117] "RemoveContainer" containerID="c47bf710a440275cb896551bbf5722ea9e9c8f9d57c0b0612e6041cf2a45fa2c" Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:47.638051 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18875ea2469f774e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea2469f774e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.828777806 +0000 UTC m=+3.817342892,LastTimestamp:2026-01-04 00:10:47.631966148 +0000 UTC m=+21.620531234,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:47.912954 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18875ea2535dbcf4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea2535dbcf4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:30.042574068 +0000 UTC m=+4.031139154,LastTimestamp:2026-01-04 00:10:47.902733128 +0000 UTC m=+21.891298214,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:47 crc kubenswrapper[5108]: E0104 00:10:47.927559 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18875ea254a7ed1a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea254a7ed1a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:30.064213274 +0000 UTC m=+4.052778360,LastTimestamp:2026-01-04 00:10:47.922552815 +0000 UTC m=+21.911117901,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:48 crc kubenswrapper[5108]: I0104 00:10:48.356646 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Jan 04 00:10:48 crc kubenswrapper[5108]: I0104 00:10:48.640443 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 04 00:10:48 crc kubenswrapper[5108]: I0104 00:10:48.640891 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 04 00:10:48 crc kubenswrapper[5108]: I0104 00:10:48.642442 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d24a901439ef6eaaf7208b44cf6142afce29c9d0aeed9e5ef162be07f373f60c" exitCode=255 Jan 04 00:10:48 crc kubenswrapper[5108]: I0104 00:10:48.642530 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"d24a901439ef6eaaf7208b44cf6142afce29c9d0aeed9e5ef162be07f373f60c"} Jan 04 00:10:48 crc kubenswrapper[5108]: I0104 00:10:48.642581 5108 scope.go:117] "RemoveContainer" containerID="c47bf710a440275cb896551bbf5722ea9e9c8f9d57c0b0612e6041cf2a45fa2c" Jan 04 00:10:48 crc kubenswrapper[5108]: I0104 00:10:48.642838 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:48 crc kubenswrapper[5108]: I0104 00:10:48.643659 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:48 crc kubenswrapper[5108]: I0104 00:10:48.643700 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:48 crc kubenswrapper[5108]: I0104 00:10:48.643711 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:48 crc kubenswrapper[5108]: E0104 
00:10:48.644433 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:48 crc kubenswrapper[5108]: I0104 00:10:48.644747 5108 scope.go:117] "RemoveContainer" containerID="d24a901439ef6eaaf7208b44cf6142afce29c9d0aeed9e5ef162be07f373f60c" Jan 04 00:10:48 crc kubenswrapper[5108]: E0104 00:10:48.644996 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 04 00:10:48 crc kubenswrapper[5108]: E0104 00:10:48.652115 5108 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea6a82776b2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:48.644949682 +0000 UTC m=+22.633514768,LastTimestamp:2026-01-04 00:10:48.644949682 +0000 UTC m=+22.633514768,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:49 crc kubenswrapper[5108]: 
E0104 00:10:49.006647 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 04 00:10:49 crc kubenswrapper[5108]: I0104 00:10:49.357530 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:10:49 crc kubenswrapper[5108]: I0104 00:10:49.648162 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 04 00:10:50 crc kubenswrapper[5108]: I0104 00:10:50.352764 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:10:50 crc kubenswrapper[5108]: I0104 00:10:50.729254 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:10:50 crc kubenswrapper[5108]: I0104 00:10:50.729625 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:50 crc kubenswrapper[5108]: I0104 00:10:50.730892 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:50 crc kubenswrapper[5108]: I0104 00:10:50.731035 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:50 crc kubenswrapper[5108]: I0104 00:10:50.731107 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 04 00:10:50 crc kubenswrapper[5108]: E0104 00:10:50.731528 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:50 crc kubenswrapper[5108]: I0104 00:10:50.731955 5108 scope.go:117] "RemoveContainer" containerID="d24a901439ef6eaaf7208b44cf6142afce29c9d0aeed9e5ef162be07f373f60c" Jan 04 00:10:50 crc kubenswrapper[5108]: E0104 00:10:50.732283 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 04 00:10:50 crc kubenswrapper[5108]: E0104 00:10:50.737000 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18875ea6a82776b2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea6a82776b2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:48.644949682 +0000 UTC m=+22.633514768,LastTimestamp:2026-01-04 00:10:50.732252379 +0000 UTC m=+24.720817465,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:51 crc kubenswrapper[5108]: I0104 00:10:51.353949 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:10:52 crc kubenswrapper[5108]: I0104 00:10:52.354072 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:10:52 crc kubenswrapper[5108]: I0104 00:10:52.787580 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 04 00:10:52 crc kubenswrapper[5108]: I0104 00:10:52.788245 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:52 crc kubenswrapper[5108]: I0104 00:10:52.789417 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:52 crc kubenswrapper[5108]: I0104 00:10:52.789474 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:52 crc kubenswrapper[5108]: I0104 00:10:52.789489 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:52 crc kubenswrapper[5108]: E0104 00:10:52.789979 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:52 crc kubenswrapper[5108]: I0104 00:10:52.793408 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 04 
00:10:52 crc kubenswrapper[5108]: I0104 00:10:52.838827 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:52 crc kubenswrapper[5108]: I0104 00:10:52.840191 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:52 crc kubenswrapper[5108]: I0104 00:10:52.840345 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:52 crc kubenswrapper[5108]: I0104 00:10:52.840453 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:52 crc kubenswrapper[5108]: I0104 00:10:52.840578 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 04 00:10:52 crc kubenswrapper[5108]: E0104 00:10:52.849938 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 04 00:10:53 crc kubenswrapper[5108]: I0104 00:10:53.356371 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:10:53 crc kubenswrapper[5108]: E0104 00:10:53.641113 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 04 00:10:53 crc kubenswrapper[5108]: E0104 00:10:53.646733 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource 
\"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 04 00:10:53 crc kubenswrapper[5108]: I0104 00:10:53.660624 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:53 crc kubenswrapper[5108]: I0104 00:10:53.661634 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:53 crc kubenswrapper[5108]: I0104 00:10:53.661703 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:53 crc kubenswrapper[5108]: I0104 00:10:53.661722 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:53 crc kubenswrapper[5108]: E0104 00:10:53.662276 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:54 crc kubenswrapper[5108]: I0104 00:10:54.353468 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:10:55 crc kubenswrapper[5108]: E0104 00:10:55.121288 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 04 00:10:55 crc kubenswrapper[5108]: I0104 00:10:55.353860 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at 
the cluster scope Jan 04 00:10:56 crc kubenswrapper[5108]: E0104 00:10:56.013838 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 04 00:10:56 crc kubenswrapper[5108]: I0104 00:10:56.355137 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:10:56 crc kubenswrapper[5108]: E0104 00:10:56.518335 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 04 00:10:56 crc kubenswrapper[5108]: E0104 00:10:56.881054 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 04 00:10:57 crc kubenswrapper[5108]: I0104 00:10:57.094331 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:10:57 crc kubenswrapper[5108]: I0104 00:10:57.094783 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:57 crc kubenswrapper[5108]: I0104 00:10:57.096150 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:57 crc kubenswrapper[5108]: I0104 00:10:57.096328 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:57 crc kubenswrapper[5108]: I0104 
00:10:57.096372 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:57 crc kubenswrapper[5108]: E0104 00:10:57.097315 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:10:57 crc kubenswrapper[5108]: I0104 00:10:57.098122 5108 scope.go:117] "RemoveContainer" containerID="d24a901439ef6eaaf7208b44cf6142afce29c9d0aeed9e5ef162be07f373f60c" Jan 04 00:10:57 crc kubenswrapper[5108]: E0104 00:10:57.098609 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 04 00:10:57 crc kubenswrapper[5108]: E0104 00:10:57.105785 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18875ea6a82776b2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea6a82776b2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:48.644949682 +0000 UTC m=+22.633514768,LastTimestamp:2026-01-04 00:10:57.098532202 
+0000 UTC m=+31.087097328,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:10:57 crc kubenswrapper[5108]: I0104 00:10:57.357853 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:10:58 crc kubenswrapper[5108]: I0104 00:10:58.355947 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:10:59 crc kubenswrapper[5108]: I0104 00:10:59.354695 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:10:59 crc kubenswrapper[5108]: I0104 00:10:59.851013 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:10:59 crc kubenswrapper[5108]: I0104 00:10:59.852324 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:10:59 crc kubenswrapper[5108]: I0104 00:10:59.852367 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:10:59 crc kubenswrapper[5108]: I0104 00:10:59.852381 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:10:59 crc kubenswrapper[5108]: I0104 00:10:59.852409 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 04 00:10:59 crc kubenswrapper[5108]: E0104 00:10:59.867099 5108 kubelet_node_status.go:116] 
"Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 04 00:11:00 crc kubenswrapper[5108]: I0104 00:11:00.356293 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:01 crc kubenswrapper[5108]: I0104 00:11:01.354244 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:02 crc kubenswrapper[5108]: I0104 00:11:02.355772 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:03 crc kubenswrapper[5108]: E0104 00:11:03.019375 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 04 00:11:03 crc kubenswrapper[5108]: I0104 00:11:03.356943 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:04 crc kubenswrapper[5108]: I0104 00:11:04.354683 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:05 crc kubenswrapper[5108]: I0104 00:11:05.353983 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:06 crc kubenswrapper[5108]: I0104 00:11:06.354720 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:06 crc kubenswrapper[5108]: E0104 00:11:06.519147 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 04 00:11:06 crc kubenswrapper[5108]: I0104 00:11:06.868120 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:11:06 crc kubenswrapper[5108]: I0104 00:11:06.869546 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:06 crc kubenswrapper[5108]: I0104 00:11:06.869619 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:06 crc kubenswrapper[5108]: I0104 00:11:06.869637 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:06 crc kubenswrapper[5108]: I0104 00:11:06.869678 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 04 00:11:06 crc kubenswrapper[5108]: E0104 00:11:06.882842 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 04 
00:11:07 crc kubenswrapper[5108]: I0104 00:11:07.355331 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:08 crc kubenswrapper[5108]: I0104 00:11:08.356549 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:09 crc kubenswrapper[5108]: I0104 00:11:09.353045 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:10 crc kubenswrapper[5108]: E0104 00:11:10.026283 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 04 00:11:10 crc kubenswrapper[5108]: I0104 00:11:10.355666 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:11 crc kubenswrapper[5108]: I0104 00:11:11.355954 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:12 crc kubenswrapper[5108]: I0104 00:11:12.355066 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:12 crc kubenswrapper[5108]: I0104 00:11:12.449148 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:11:12 crc kubenswrapper[5108]: I0104 00:11:12.451365 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:12 crc kubenswrapper[5108]: I0104 00:11:12.451461 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:12 crc kubenswrapper[5108]: I0104 00:11:12.451478 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:12 crc kubenswrapper[5108]: E0104 00:11:12.452167 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:11:12 crc kubenswrapper[5108]: I0104 00:11:12.452586 5108 scope.go:117] "RemoveContainer" containerID="d24a901439ef6eaaf7208b44cf6142afce29c9d0aeed9e5ef162be07f373f60c" Jan 04 00:11:12 crc kubenswrapper[5108]: E0104 00:11:12.468908 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18875ea2469f774e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea2469f774e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:29.828777806 +0000 UTC m=+3.817342892,LastTimestamp:2026-01-04 00:11:12.456006324 +0000 UTC m=+46.444571410,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:11:12 crc kubenswrapper[5108]: E0104 00:11:12.781425 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18875ea2535dbcf4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea2535dbcf4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:30.042574068 +0000 UTC m=+4.031139154,LastTimestamp:2026-01-04 00:11:12.773363521 +0000 UTC m=+46.761928607,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:11:12 crc kubenswrapper[5108]: E0104 00:11:12.804939 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18875ea254a7ed1a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea254a7ed1a openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:30.064213274 +0000 UTC m=+4.052778360,LastTimestamp:2026-01-04 00:11:12.799148446 +0000 UTC m=+46.787713552,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.060863 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.061102 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.062438 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.062479 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.062492 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:13 crc kubenswrapper[5108]: E0104 00:11:13.062842 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.355687 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.718445 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.721820 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b0f1571354032f22b3cd6e7c486cd35896f2ac470b5058a9b8574e1b4db51757"} Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.722156 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.723000 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.723104 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.723168 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:13 crc kubenswrapper[5108]: E0104 00:11:13.723759 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.883506 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.885088 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.885243 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.885320 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:13 crc kubenswrapper[5108]: I0104 00:11:13.885399 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 04 00:11:13 crc kubenswrapper[5108]: E0104 00:11:13.896042 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 04 00:11:13 crc kubenswrapper[5108]: E0104 00:11:13.979191 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 04 00:11:14 crc kubenswrapper[5108]: I0104 00:11:14.354276 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:15 crc kubenswrapper[5108]: I0104 00:11:15.354300 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:15 crc kubenswrapper[5108]: I0104 00:11:15.730400 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 04 00:11:15 crc kubenswrapper[5108]: I0104 00:11:15.731614 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 04 00:11:15 crc kubenswrapper[5108]: I0104 00:11:15.733676 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b0f1571354032f22b3cd6e7c486cd35896f2ac470b5058a9b8574e1b4db51757" exitCode=255 Jan 04 00:11:15 crc kubenswrapper[5108]: I0104 00:11:15.733855 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"b0f1571354032f22b3cd6e7c486cd35896f2ac470b5058a9b8574e1b4db51757"} Jan 04 00:11:15 crc kubenswrapper[5108]: I0104 00:11:15.733977 5108 scope.go:117] "RemoveContainer" containerID="d24a901439ef6eaaf7208b44cf6142afce29c9d0aeed9e5ef162be07f373f60c" Jan 04 00:11:15 crc kubenswrapper[5108]: I0104 00:11:15.735615 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:11:15 crc kubenswrapper[5108]: I0104 00:11:15.737727 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:15 crc kubenswrapper[5108]: I0104 00:11:15.737885 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:15 crc kubenswrapper[5108]: I0104 00:11:15.738017 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:15 crc kubenswrapper[5108]: E0104 00:11:15.738583 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:11:15 crc kubenswrapper[5108]: I0104 00:11:15.739053 5108 scope.go:117] "RemoveContainer" containerID="b0f1571354032f22b3cd6e7c486cd35896f2ac470b5058a9b8574e1b4db51757" Jan 04 00:11:15 crc kubenswrapper[5108]: E0104 
00:11:15.739548 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 04 00:11:15 crc kubenswrapper[5108]: E0104 00:11:15.747711 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18875ea6a82776b2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea6a82776b2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:48.644949682 +0000 UTC m=+22.633514768,LastTimestamp:2026-01-04 00:11:15.739496748 +0000 UTC m=+49.728061854,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:11:16 crc kubenswrapper[5108]: I0104 00:11:16.354055 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:16 crc kubenswrapper[5108]: E0104 00:11:16.520501 5108 
eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 04 00:11:16 crc kubenswrapper[5108]: I0104 00:11:16.738564 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 04 00:11:17 crc kubenswrapper[5108]: E0104 00:11:17.036542 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 04 00:11:17 crc kubenswrapper[5108]: I0104 00:11:17.357296 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:17 crc kubenswrapper[5108]: E0104 00:11:17.526230 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 04 00:11:18 crc kubenswrapper[5108]: I0104 00:11:18.354528 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:19 crc kubenswrapper[5108]: I0104 00:11:19.354481 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:19 crc 
kubenswrapper[5108]: E0104 00:11:19.955043 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 04 00:11:20 crc kubenswrapper[5108]: E0104 00:11:20.059108 5108 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 04 00:11:20 crc kubenswrapper[5108]: I0104 00:11:20.356849 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:20 crc kubenswrapper[5108]: I0104 00:11:20.729286 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:11:20 crc kubenswrapper[5108]: I0104 00:11:20.729671 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:11:20 crc kubenswrapper[5108]: I0104 00:11:20.730799 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:20 crc kubenswrapper[5108]: I0104 00:11:20.730833 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:20 crc kubenswrapper[5108]: I0104 00:11:20.730849 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:20 crc kubenswrapper[5108]: E0104 00:11:20.731265 5108 kubelet.go:3336] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:11:20 crc kubenswrapper[5108]: I0104 00:11:20.731624 5108 scope.go:117] "RemoveContainer" containerID="b0f1571354032f22b3cd6e7c486cd35896f2ac470b5058a9b8574e1b4db51757" Jan 04 00:11:20 crc kubenswrapper[5108]: E0104 00:11:20.731853 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 04 00:11:20 crc kubenswrapper[5108]: E0104 00:11:20.737437 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18875ea6a82776b2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea6a82776b2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:48.644949682 +0000 UTC m=+22.633514768,LastTimestamp:2026-01-04 00:11:20.731823656 +0000 UTC m=+54.720388742,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:11:20 crc kubenswrapper[5108]: 
I0104 00:11:20.897123 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:11:20 crc kubenswrapper[5108]: I0104 00:11:20.898794 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:20 crc kubenswrapper[5108]: I0104 00:11:20.898839 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:20 crc kubenswrapper[5108]: I0104 00:11:20.898850 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:20 crc kubenswrapper[5108]: I0104 00:11:20.898879 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 04 00:11:20 crc kubenswrapper[5108]: E0104 00:11:20.910262 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 04 00:11:21 crc kubenswrapper[5108]: I0104 00:11:21.353230 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:22 crc kubenswrapper[5108]: I0104 00:11:22.358118 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:23 crc kubenswrapper[5108]: I0104 00:11:23.357911 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 
00:11:23 crc kubenswrapper[5108]: I0104 00:11:23.723016 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:11:23 crc kubenswrapper[5108]: I0104 00:11:23.723509 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:11:23 crc kubenswrapper[5108]: I0104 00:11:23.724914 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:23 crc kubenswrapper[5108]: I0104 00:11:23.724991 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:23 crc kubenswrapper[5108]: I0104 00:11:23.725014 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:23 crc kubenswrapper[5108]: E0104 00:11:23.725561 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:11:23 crc kubenswrapper[5108]: I0104 00:11:23.725917 5108 scope.go:117] "RemoveContainer" containerID="b0f1571354032f22b3cd6e7c486cd35896f2ac470b5058a9b8574e1b4db51757" Jan 04 00:11:23 crc kubenswrapper[5108]: E0104 00:11:23.726274 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 04 00:11:23 crc kubenswrapper[5108]: E0104 00:11:23.735193 5108 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18875ea6a82776b2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18875ea6a82776b2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:10:48.644949682 +0000 UTC m=+22.633514768,LastTimestamp:2026-01-04 00:11:23.726176479 +0000 UTC m=+57.714741575,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:11:24 crc kubenswrapper[5108]: E0104 00:11:24.044126 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 04 00:11:24 crc kubenswrapper[5108]: I0104 00:11:24.353870 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:25 crc kubenswrapper[5108]: I0104 00:11:25.353989 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:26 crc kubenswrapper[5108]: I0104 00:11:26.353460 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:26 crc kubenswrapper[5108]: E0104 00:11:26.521083 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 04 00:11:27 crc kubenswrapper[5108]: I0104 00:11:27.357333 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:27 crc kubenswrapper[5108]: I0104 00:11:27.910757 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:11:27 crc kubenswrapper[5108]: I0104 00:11:27.913483 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:27 crc kubenswrapper[5108]: I0104 00:11:27.913646 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:27 crc kubenswrapper[5108]: I0104 00:11:27.913773 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:27 crc kubenswrapper[5108]: I0104 00:11:27.913920 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 04 00:11:27 crc kubenswrapper[5108]: E0104 00:11:27.980156 5108 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 04 00:11:28 crc kubenswrapper[5108]: I0104 00:11:28.353259 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot 
get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:29 crc kubenswrapper[5108]: I0104 00:11:29.354380 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:30 crc kubenswrapper[5108]: I0104 00:11:30.354409 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:31 crc kubenswrapper[5108]: E0104 00:11:31.051230 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 04 00:11:31 crc kubenswrapper[5108]: I0104 00:11:31.354794 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:32 crc kubenswrapper[5108]: I0104 00:11:32.354521 5108 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 04 00:11:32 crc kubenswrapper[5108]: I0104 00:11:32.622136 5108 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-w72nm" Jan 04 00:11:32 crc kubenswrapper[5108]: I0104 00:11:32.629937 5108 csr.go:270] "Certificate signing request is issued" 
logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-w72nm"
Jan 04 00:11:32 crc kubenswrapper[5108]: I0104 00:11:32.711118 5108 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 04 00:11:33 crc kubenswrapper[5108]: I0104 00:11:33.243667 5108 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 04 00:11:33 crc kubenswrapper[5108]: I0104 00:11:33.631952 5108 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-03 00:06:32 +0000 UTC" deadline="2026-01-28 17:27:36.520724395 +0000 UTC"
Jan 04 00:11:33 crc kubenswrapper[5108]: I0104 00:11:33.632012 5108 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="593h16m2.888717921s"
Jan 04 00:11:34 crc kubenswrapper[5108]: I0104 00:11:34.980463 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:11:34 crc kubenswrapper[5108]: I0104 00:11:34.981595 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:34 crc kubenswrapper[5108]: I0104 00:11:34.981655 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:34 crc kubenswrapper[5108]: I0104 00:11:34.981672 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:34 crc kubenswrapper[5108]: I0104 00:11:34.981879 5108 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 04 00:11:34 crc kubenswrapper[5108]: I0104 00:11:34.992914 5108 kubelet_node_status.go:127] "Node was previously registered" node="crc"
Jan 04 00:11:34 crc kubenswrapper[5108]: I0104 00:11:34.993398 5108 kubelet_node_status.go:81] "Successfully registered node" node="crc"
Jan 04 00:11:34 crc kubenswrapper[5108]: E0104 00:11:34.993467 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.003888 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.003954 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.003974 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.004002 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.004019 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:35Z","lastTransitionTime":"2026-01-04T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:35 crc kubenswrapper[5108]: E0104 00:11:35.022312 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d5d783a5-a674-4781-98e0-72a073e00d58\\\",\\\"systemUUID\\\":\\\"b32cf431-599e-4ef4-b60f-ec5735cef856\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.031157 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.031259 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.031271 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.031295 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.031309 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:35Z","lastTransitionTime":"2026-01-04T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:35 crc kubenswrapper[5108]: E0104 00:11:35.045151 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d5d783a5-a674-4781-98e0-72a073e00d58\\\",\\\"systemUUID\\\":\\\"b32cf431-599e-4ef4-b60f-ec5735cef856\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.054029 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.054097 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.054112 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.054138 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.054154 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:35Z","lastTransitionTime":"2026-01-04T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:35 crc kubenswrapper[5108]: E0104 00:11:35.067356 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d5d783a5-a674-4781-98e0-72a073e00d58\\\",\\\"systemUUID\\\":\\\"b32cf431-599e-4ef4-b60f-ec5735cef856\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.076254 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.076329 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.076344 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.076373 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:35 crc kubenswrapper[5108]: I0104 00:11:35.076391 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:35Z","lastTransitionTime":"2026-01-04T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:35 crc kubenswrapper[5108]: E0104 00:11:35.089494 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d5d783a5-a674-4781-98e0-72a073e00d58\\\",\\\"systemUUID\\\":\\\"b32cf431-599e-4ef4-b60f-ec5735cef856\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:35 crc kubenswrapper[5108]: E0104 00:11:35.089653 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 04 00:11:35 crc kubenswrapper[5108]: E0104 00:11:35.089687 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:35 crc kubenswrapper[5108]: E0104 00:11:35.190292 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:35 crc kubenswrapper[5108]: E0104 00:11:35.290768 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:35 crc kubenswrapper[5108]: E0104 00:11:35.391234 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:35 crc kubenswrapper[5108]: E0104 00:11:35.492166 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:35 crc kubenswrapper[5108]: E0104 00:11:35.593388 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:35 crc kubenswrapper[5108]: E0104 00:11:35.694378 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:35 crc kubenswrapper[5108]: E0104 00:11:35.795551 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:35 crc kubenswrapper[5108]: E0104 00:11:35.896164 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:35 crc kubenswrapper[5108]: E0104 00:11:35.996351 5108 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:36 crc kubenswrapper[5108]: E0104 00:11:36.096697 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:36 crc kubenswrapper[5108]: E0104 00:11:36.196917 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:36 crc kubenswrapper[5108]: E0104 00:11:36.297604 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:36 crc kubenswrapper[5108]: E0104 00:11:36.397779 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:36 crc kubenswrapper[5108]: I0104 00:11:36.448937 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:11:36 crc kubenswrapper[5108]: I0104 00:11:36.450098 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:36 crc kubenswrapper[5108]: I0104 00:11:36.450162 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:36 crc kubenswrapper[5108]: I0104 00:11:36.450172 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:36 crc kubenswrapper[5108]: E0104 00:11:36.451091 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:11:36 crc kubenswrapper[5108]: I0104 00:11:36.451383 5108 scope.go:117] "RemoveContainer" containerID="b0f1571354032f22b3cd6e7c486cd35896f2ac470b5058a9b8574e1b4db51757" Jan 04 00:11:36 crc kubenswrapper[5108]: E0104 00:11:36.498383 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node 
\"crc\" not found" Jan 04 00:11:36 crc kubenswrapper[5108]: E0104 00:11:36.522835 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 04 00:11:36 crc kubenswrapper[5108]: E0104 00:11:36.598753 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:36 crc kubenswrapper[5108]: E0104 00:11:36.699100 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:36 crc kubenswrapper[5108]: E0104 00:11:36.799703 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:36 crc kubenswrapper[5108]: I0104 00:11:36.804336 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 04 00:11:36 crc kubenswrapper[5108]: I0104 00:11:36.806052 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"001488f02f298ecdbad61e43398fbbe845d04526ab076c51dc377df80bfbc40e"} Jan 04 00:11:36 crc kubenswrapper[5108]: I0104 00:11:36.806348 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:11:36 crc kubenswrapper[5108]: I0104 00:11:36.806949 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:36 crc kubenswrapper[5108]: I0104 00:11:36.807009 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:36 crc kubenswrapper[5108]: I0104 00:11:36.807024 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:36 crc 
kubenswrapper[5108]: E0104 00:11:36.807493 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 04 00:11:36 crc kubenswrapper[5108]: E0104 00:11:36.900066 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:37 crc kubenswrapper[5108]: E0104 00:11:37.000558 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:37 crc kubenswrapper[5108]: E0104 00:11:37.101588 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:37 crc kubenswrapper[5108]: E0104 00:11:37.202656 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:37 crc kubenswrapper[5108]: E0104 00:11:37.302835 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:37 crc kubenswrapper[5108]: E0104 00:11:37.403235 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:37 crc kubenswrapper[5108]: E0104 00:11:37.503517 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:37 crc kubenswrapper[5108]: E0104 00:11:37.603835 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:37 crc kubenswrapper[5108]: E0104 00:11:37.704041 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:37 crc kubenswrapper[5108]: E0104 00:11:37.804885 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 04 00:11:37 crc kubenswrapper[5108]: I0104 00:11:37.809994 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 04 00:11:37 crc kubenswrapper[5108]: I0104 00:11:37.810439 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 04 00:11:37 crc kubenswrapper[5108]: I0104 00:11:37.812347 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="001488f02f298ecdbad61e43398fbbe845d04526ab076c51dc377df80bfbc40e" exitCode=255 Jan 04 00:11:37 crc kubenswrapper[5108]: I0104 00:11:37.812398 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"001488f02f298ecdbad61e43398fbbe845d04526ab076c51dc377df80bfbc40e"} Jan 04 00:11:37 crc kubenswrapper[5108]: I0104 00:11:37.812437 5108 scope.go:117] "RemoveContainer" containerID="b0f1571354032f22b3cd6e7c486cd35896f2ac470b5058a9b8574e1b4db51757" Jan 04 00:11:37 crc kubenswrapper[5108]: I0104 00:11:37.812653 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 04 00:11:37 crc kubenswrapper[5108]: I0104 00:11:37.813367 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:37 crc kubenswrapper[5108]: I0104 00:11:37.813396 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:37 crc kubenswrapper[5108]: I0104 00:11:37.813407 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:37 crc kubenswrapper[5108]: E0104 00:11:37.813788 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" 
node="crc"
Jan 04 00:11:37 crc kubenswrapper[5108]: I0104 00:11:37.814017 5108 scope.go:117] "RemoveContainer" containerID="001488f02f298ecdbad61e43398fbbe845d04526ab076c51dc377df80bfbc40e"
Jan 04 00:11:37 crc kubenswrapper[5108]: E0104 00:11:37.814251 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 04 00:11:37 crc kubenswrapper[5108]: E0104 00:11:37.905633 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:38 crc kubenswrapper[5108]: E0104 00:11:38.006520 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:38 crc kubenswrapper[5108]: E0104 00:11:38.107239 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:38 crc kubenswrapper[5108]: E0104 00:11:38.207591 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:38 crc kubenswrapper[5108]: E0104 00:11:38.308758 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:38 crc kubenswrapper[5108]: E0104 00:11:38.409488 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:38 crc kubenswrapper[5108]: E0104 00:11:38.509776 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:38 crc kubenswrapper[5108]: E0104 00:11:38.609959 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:38 crc kubenswrapper[5108]: E0104 00:11:38.710679 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:38 crc kubenswrapper[5108]: E0104 00:11:38.811710 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:38 crc kubenswrapper[5108]: I0104 00:11:38.822931 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 04 00:11:38 crc kubenswrapper[5108]: E0104 00:11:38.912290 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:39 crc kubenswrapper[5108]: E0104 00:11:39.013415 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:39 crc kubenswrapper[5108]: E0104 00:11:39.114178 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:39 crc kubenswrapper[5108]: E0104 00:11:39.214754 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:39 crc kubenswrapper[5108]: E0104 00:11:39.315175 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:39 crc kubenswrapper[5108]: E0104 00:11:39.415876 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:39 crc kubenswrapper[5108]: E0104 00:11:39.516482 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:39 crc kubenswrapper[5108]: E0104 00:11:39.616857 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:39 crc kubenswrapper[5108]: E0104 00:11:39.717228 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:39 crc kubenswrapper[5108]: E0104 00:11:39.817646 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:39 crc kubenswrapper[5108]: E0104 00:11:39.918772 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:40 crc kubenswrapper[5108]: E0104 00:11:40.018946 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:40 crc kubenswrapper[5108]: E0104 00:11:40.119974 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:40 crc kubenswrapper[5108]: E0104 00:11:40.220290 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:40 crc kubenswrapper[5108]: E0104 00:11:40.320826 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:40 crc kubenswrapper[5108]: E0104 00:11:40.421555 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:40 crc kubenswrapper[5108]: E0104 00:11:40.521991 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:40 crc kubenswrapper[5108]: E0104 00:11:40.622570 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:40 crc kubenswrapper[5108]: E0104 00:11:40.722783 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:40 crc kubenswrapper[5108]: I0104 00:11:40.729057 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:11:40 crc kubenswrapper[5108]: I0104 00:11:40.729431 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:11:40 crc kubenswrapper[5108]: I0104 00:11:40.730684 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:40 crc kubenswrapper[5108]: I0104 00:11:40.730739 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:40 crc kubenswrapper[5108]: I0104 00:11:40.730753 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:40 crc kubenswrapper[5108]: E0104 00:11:40.731456 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:11:40 crc kubenswrapper[5108]: I0104 00:11:40.731776 5108 scope.go:117] "RemoveContainer" containerID="001488f02f298ecdbad61e43398fbbe845d04526ab076c51dc377df80bfbc40e"
Jan 04 00:11:40 crc kubenswrapper[5108]: E0104 00:11:40.732034 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 04 00:11:40 crc kubenswrapper[5108]: E0104 00:11:40.823036 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:40 crc kubenswrapper[5108]: E0104 00:11:40.924100 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:41 crc kubenswrapper[5108]: E0104 00:11:41.025160 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:41 crc kubenswrapper[5108]: E0104 00:11:41.126138 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:41 crc kubenswrapper[5108]: E0104 00:11:41.226494 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:41 crc kubenswrapper[5108]: E0104 00:11:41.327001 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:41 crc kubenswrapper[5108]: E0104 00:11:41.428012 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:41 crc kubenswrapper[5108]: E0104 00:11:41.528596 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:41 crc kubenswrapper[5108]: E0104 00:11:41.628740 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:41 crc kubenswrapper[5108]: E0104 00:11:41.729332 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:41 crc kubenswrapper[5108]: E0104 00:11:41.829535 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:41 crc kubenswrapper[5108]: E0104 00:11:41.929746 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:42 crc kubenswrapper[5108]: E0104 00:11:42.029901 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:42 crc kubenswrapper[5108]: E0104 00:11:42.130253 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:42 crc kubenswrapper[5108]: E0104 00:11:42.231067 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:42 crc kubenswrapper[5108]: E0104 00:11:42.331254 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:42 crc kubenswrapper[5108]: E0104 00:11:42.432315 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:42 crc kubenswrapper[5108]: E0104 00:11:42.533310 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:42 crc kubenswrapper[5108]: E0104 00:11:42.633571 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:42 crc kubenswrapper[5108]: E0104 00:11:42.734309 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:42 crc kubenswrapper[5108]: E0104 00:11:42.835435 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:42 crc kubenswrapper[5108]: E0104 00:11:42.939878 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:43 crc kubenswrapper[5108]: E0104 00:11:43.040416 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:43 crc kubenswrapper[5108]: E0104 00:11:43.141435 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:43 crc kubenswrapper[5108]: E0104 00:11:43.242455 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:43 crc kubenswrapper[5108]: E0104 00:11:43.343325 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:43 crc kubenswrapper[5108]: E0104 00:11:43.443880 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:43 crc kubenswrapper[5108]: E0104 00:11:43.544645 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:43 crc kubenswrapper[5108]: E0104 00:11:43.645665 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:43 crc kubenswrapper[5108]: E0104 00:11:43.746639 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:43 crc kubenswrapper[5108]: E0104 00:11:43.846829 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:43 crc kubenswrapper[5108]: E0104 00:11:43.947591 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:44 crc kubenswrapper[5108]: E0104 00:11:44.048363 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:44 crc kubenswrapper[5108]: E0104 00:11:44.149566 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:44 crc kubenswrapper[5108]: E0104 00:11:44.250562 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:44 crc kubenswrapper[5108]: E0104 00:11:44.351430 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:44 crc kubenswrapper[5108]: E0104 00:11:44.452268 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:44 crc kubenswrapper[5108]: E0104 00:11:44.552777 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:44 crc kubenswrapper[5108]: E0104 00:11:44.653104 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:44 crc kubenswrapper[5108]: E0104 00:11:44.753704 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:44 crc kubenswrapper[5108]: E0104 00:11:44.854096 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:44 crc kubenswrapper[5108]: E0104 00:11:44.954657 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:45 crc kubenswrapper[5108]: E0104 00:11:45.055500 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:45 crc kubenswrapper[5108]: E0104 00:11:45.156595 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:45 crc kubenswrapper[5108]: E0104 00:11:45.176929 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.182559 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.182612 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.182624 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.182641 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 04 00:11:45 crc
kubenswrapper[5108]: I0104 00:11:45.182650 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:45Z","lastTransitionTime":"2026-01-04T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 04 00:11:45 crc kubenswrapper[5108]: E0104 00:11:45.193257 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d5d783a5-a674-4781-98e0-72a073e00d58\\\",\\\"systemUUID\\\":\\\"b32cf431-599e-4ef4-b60f-ec5735cef856\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.201680 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.201746 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.201762 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.201785 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.201803 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:45Z","lastTransitionTime":"2026-01-04T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 04 00:11:45 crc kubenswrapper[5108]: E0104 00:11:45.212956 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d5d783a5-a674-4781-98e0-72a073e00d58\\\",\\\"systemUUID\\\":\\\"b32cf431-599e-4ef4-b60f-ec5735cef856\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.222008 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.222079 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.222099 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.222123 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.222142 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:45Z","lastTransitionTime":"2026-01-04T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 04 00:11:45 crc kubenswrapper[5108]: E0104 00:11:45.233309 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d5d783a5-a674-4781-98e0-72a073e00d58\\\",\\\"systemUUID\\\":\\\"b32cf431-599e-4ef4-b60f-ec5735cef856\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.241827 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.241875 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.241898 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.241916 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:45 crc kubenswrapper[5108]: I0104 00:11:45.241928 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:45Z","lastTransitionTime":"2026-01-04T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 04 00:11:45 crc kubenswrapper[5108]: E0104 00:11:45.252859 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d5d783a5-a674-4781-98e0-72a073e00d58\\\",\\\"systemUUID\\\":\\\"b32cf431-599e-4ef4-b60f-ec5735cef856\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 04 00:11:45 crc kubenswrapper[5108]: E0104 00:11:45.253047 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Jan 04 00:11:45 crc kubenswrapper[5108]: E0104 00:11:45.257782 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:45 crc kubenswrapper[5108]: E0104 00:11:45.358475 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:45 crc kubenswrapper[5108]: E0104 00:11:45.458768 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:45 crc kubenswrapper[5108]: E0104 00:11:45.559556 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:45 crc kubenswrapper[5108]: E0104 00:11:45.660101 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:45 crc kubenswrapper[5108]: E0104 00:11:45.760267 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:45 crc kubenswrapper[5108]: E0104 00:11:45.860944 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:45 crc kubenswrapper[5108]: E0104 00:11:45.961145 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:46 crc kubenswrapper[5108]: E0104 00:11:46.062361 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:46 crc kubenswrapper[5108]: E0104 00:11:46.163004 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:46 crc kubenswrapper[5108]: E0104 00:11:46.264220 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:46 crc kubenswrapper[5108]: E0104 00:11:46.365331 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:46 crc kubenswrapper[5108]: E0104 00:11:46.466067 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:46 crc kubenswrapper[5108]: E0104 00:11:46.523406 5108 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 04 00:11:46 crc kubenswrapper[5108]: E0104 00:11:46.566999 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:46 crc kubenswrapper[5108]: E0104 00:11:46.667251 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:46 crc kubenswrapper[5108]: E0104 00:11:46.768249 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:46 crc kubenswrapper[5108]: I0104 00:11:46.807171 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:11:46 crc kubenswrapper[5108]: I0104 00:11:46.807589 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:11:46 crc kubenswrapper[5108]: I0104 00:11:46.808623 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:46 crc kubenswrapper[5108]: I0104 00:11:46.808669 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:46 crc kubenswrapper[5108]: I0104 00:11:46.808682 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:46 crc kubenswrapper[5108]: E0104 00:11:46.809193 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:11:46 crc kubenswrapper[5108]: I0104 00:11:46.809479 5108 scope.go:117] "RemoveContainer" containerID="001488f02f298ecdbad61e43398fbbe845d04526ab076c51dc377df80bfbc40e"
Jan 04 00:11:46 crc kubenswrapper[5108]: E0104 00:11:46.809774 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 04 00:11:46 crc kubenswrapper[5108]: E0104 00:11:46.868890 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:46 crc kubenswrapper[5108]: E0104 00:11:46.970082 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:47 crc kubenswrapper[5108]: E0104 00:11:47.070641 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:47 crc kubenswrapper[5108]: I0104 00:11:47.103841 5108 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 04 00:11:47 crc kubenswrapper[5108]: E0104 00:11:47.171684 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:47 crc kubenswrapper[5108]: E0104 00:11:47.272045 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:47 crc kubenswrapper[5108]: E0104 00:11:47.372614 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:47 crc kubenswrapper[5108]: E0104 00:11:47.473237 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:47 crc kubenswrapper[5108]: E0104 00:11:47.574321 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:47 crc kubenswrapper[5108]: E0104 00:11:47.675315 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:47 crc kubenswrapper[5108]: E0104 00:11:47.776285 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:47 crc kubenswrapper[5108]: E0104 00:11:47.876808 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:47 crc kubenswrapper[5108]: E0104 00:11:47.977923 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:48 crc kubenswrapper[5108]: E0104 00:11:48.079123 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:48 crc kubenswrapper[5108]: E0104 00:11:48.179509 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:48 crc kubenswrapper[5108]: E0104 00:11:48.279926 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:48 crc kubenswrapper[5108]: E0104 00:11:48.380381 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:48 crc kubenswrapper[5108]: E0104 00:11:48.480534 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:48 crc kubenswrapper[5108]: E0104 00:11:48.580952 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:48 crc kubenswrapper[5108]: E0104 00:11:48.681124 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:48 crc kubenswrapper[5108]: E0104 00:11:48.781698 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:48 crc kubenswrapper[5108]: E0104 00:11:48.882443 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:48 crc kubenswrapper[5108]: E0104 00:11:48.982576 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:49 crc kubenswrapper[5108]: E0104 00:11:49.083129 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:49 crc kubenswrapper[5108]: E0104 00:11:49.184368 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:49 crc kubenswrapper[5108]: E0104 00:11:49.284538 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:49 crc kubenswrapper[5108]: E0104 00:11:49.384885 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:49 crc kubenswrapper[5108]: E0104 00:11:49.485249 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:49 crc kubenswrapper[5108]: E0104 00:11:49.585434 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:49 crc kubenswrapper[5108]: E0104 00:11:49.685897 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:49 crc kubenswrapper[5108]: E0104 00:11:49.786327 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:49 crc kubenswrapper[5108]: E0104 00:11:49.886534 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:49 crc kubenswrapper[5108]: E0104 00:11:49.987031 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:50 crc kubenswrapper[5108]: E0104 00:11:50.087761 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:50 crc kubenswrapper[5108]: E0104 00:11:50.188940 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:50 crc kubenswrapper[5108]: E0104 00:11:50.290142 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:50 crc kubenswrapper[5108]: E0104 00:11:50.390847 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:50 crc kubenswrapper[5108]: E0104 00:11:50.491718 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:50 crc kubenswrapper[5108]: E0104 00:11:50.592684 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:50 crc kubenswrapper[5108]: E0104 00:11:50.693802 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:50 crc kubenswrapper[5108]: E0104 00:11:50.794721 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:50 crc kubenswrapper[5108]: E0104 00:11:50.895106 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:50 crc kubenswrapper[5108]: E0104 00:11:50.995668 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:51 crc kubenswrapper[5108]: E0104 00:11:51.095925 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:51 crc kubenswrapper[5108]: E0104 00:11:51.197123 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:51 crc kubenswrapper[5108]: E0104 00:11:51.298249 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:51 crc kubenswrapper[5108]: E0104 00:11:51.399040 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:51 crc kubenswrapper[5108]: E0104 00:11:51.499249 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:51 crc kubenswrapper[5108]: E0104 00:11:51.599777 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:51 crc kubenswrapper[5108]: E0104 00:11:51.700188 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:51 crc kubenswrapper[5108]: E0104 00:11:51.800657 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:51 crc kubenswrapper[5108]: E0104 00:11:51.901742 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:52 crc kubenswrapper[5108]: E0104 00:11:52.002919 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:52 crc kubenswrapper[5108]: E0104 00:11:52.103999 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:52 crc kubenswrapper[5108]: E0104 00:11:52.204889 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:52 crc kubenswrapper[5108]: E0104 00:11:52.305119 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:52 crc kubenswrapper[5108]: E0104 00:11:52.405611 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:52 crc kubenswrapper[5108]: I0104 00:11:52.448378 5108 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 04 00:11:52 crc kubenswrapper[5108]: I0104 00:11:52.449773 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:52 crc kubenswrapper[5108]: I0104 00:11:52.449834 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:52 crc kubenswrapper[5108]: I0104 00:11:52.449846 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:52 crc kubenswrapper[5108]: E0104 00:11:52.450395 5108 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 04 00:11:52 crc kubenswrapper[5108]: E0104 00:11:52.506248 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:52 crc kubenswrapper[5108]: E0104 00:11:52.606476 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:52 crc kubenswrapper[5108]: E0104 00:11:52.706776 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:52 crc kubenswrapper[5108]: E0104 00:11:52.807858 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:52 crc kubenswrapper[5108]: E0104 00:11:52.908749 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:52 crc kubenswrapper[5108]: I0104 00:11:52.928217 5108 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 04 00:11:53 crc kubenswrapper[5108]: E0104 00:11:53.009747 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:53 crc kubenswrapper[5108]: E0104 00:11:53.110401 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:53 crc kubenswrapper[5108]: E0104 00:11:53.210922 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:53 crc kubenswrapper[5108]: E0104 00:11:53.311551 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:53 crc kubenswrapper[5108]: E0104 00:11:53.412047 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:53 crc kubenswrapper[5108]: E0104 00:11:53.512665 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:53 crc kubenswrapper[5108]: E0104 00:11:53.613374 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:53 crc kubenswrapper[5108]: E0104 00:11:53.714627 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:53 crc kubenswrapper[5108]: E0104 00:11:53.815129 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:53 crc kubenswrapper[5108]: E0104 00:11:53.916524 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.017091 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.117763 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.218726 5108 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.232851 5108 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.279889 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.295308 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.321316 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.321385 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.321405 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.321431 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.321449 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:54Z","lastTransitionTime":"2026-01-04T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.393309 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.393407 5108 apiserver.go:52] "Watching apiserver"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.399110 5108 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.399520 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-node-nhl4w","openshift-image-registry/node-ca-7vbfj","openshift-multus/network-metrics-daemon-mlfqf","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-dns/node-resolver-54hgz","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-multus/multus-additional-cni-plugins-7kzr9","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-target-fhkjl","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-machine-config-operator/machine-config-daemon-njl5v","openshift-multus/multus-rzs5n","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"]
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.401769 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.403273 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.403598 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.403810 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.403977 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.404593 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.404756 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.405844 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.407110 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.407273 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.407330 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.407594 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.409477 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.411624 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.412546 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.412556 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.413695 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.415302 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.423970 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.424026 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.424037 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.424058 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.424070 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:54Z","lastTransitionTime":"2026-01-04T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.432062 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.444677 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.459065 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.473091 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.482539 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.485054 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.486505 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.486690 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.486989 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.487457 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.489155 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.489220 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.494798 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.496468 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-54hgz" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.500590 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.500905 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.501008 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.501375 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.506754 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.506872 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mlfqf" podUID="6feab616-6edc-4a90-8ee9-f5ae1c2e80c5" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.510140 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7vbfj" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.512947 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.513817 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.514136 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.514356 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.514403 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.526615 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.526682 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.526698 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.526727 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.526744 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:54Z","lastTransitionTime":"2026-01-04T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.527480 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.539191 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlfqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zntvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zntvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlfqf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.552491 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.557577 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1304679c-1853-474c-9796-e64e919305dd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.557626 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.557716 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod 
\"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.557999 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.558043 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.558075 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.558105 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/1304679c-1853-474c-9796-e64e919305dd-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.558134 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwbvp\" (UniqueName: \"kubernetes.io/projected/1304679c-1853-474c-9796-e64e919305dd-kube-api-access-gwbvp\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.558164 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.558192 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.558251 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1304679c-1853-474c-9796-e64e919305dd-cnibin\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.558283 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " 
pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.558307 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.558333 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1304679c-1853-474c-9796-e64e919305dd-system-cni-dir\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.558376 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1304679c-1853-474c-9796-e64e919305dd-cni-binary-copy\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.559194 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.559258 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.559285 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1304679c-1853-474c-9796-e64e919305dd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.559348 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.559413 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.559514 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-04 00:11:55.059487009 +0000 UTC m=+89.048052095 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.559536 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1304679c-1853-474c-9796-e64e919305dd-os-release\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.559553 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.559561 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.559588 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.559608 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-04 00:11:55.059594712 +0000 UTC m=+89.048159808 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.565763 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.571741 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.571770 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.571784 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.571885 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-04 00:11:55.071859175 +0000 UTC m=+89.060424261 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.572185 5108 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.574042 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.574065 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.574075 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.574161 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-04 00:11:55.074151518 +0000 UTC m=+89.062716604 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.574520 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.574804 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.575256 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.579064 5108 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.580693 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.581757 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.582528 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.582547 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.582587 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.582862 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.583043 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.583151 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.583597 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " 
pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.583769 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.583806 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.585911 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.587697 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.587830 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.595580 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.595810 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.603302 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.606512 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.607508 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.607563 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.607760 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.611491 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.612116 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.612117 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.612932 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.613149 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.613403 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.613750 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.615035 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.619847 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.629018 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.629071 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.629099 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.629122 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.629140 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:54Z","lastTransitionTime":"2026-01-04T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.634784 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1304679c-1853-474c-9796-e64e919305dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7kzr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.647708 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-54hgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae5be4c-02db-4fcd-81dc-a86584c36ef5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z79d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54hgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.658177 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7vbfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c974595e-d4c8-4c12-975a-2adb13a4c399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6x7fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7vbfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.660355 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-multus-cni-dir\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.660430 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-var-lib-kubelet\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.660570 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1304679c-1853-474c-9796-e64e919305dd-system-cni-dir\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.660652 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1304679c-1853-474c-9796-e64e919305dd-system-cni-dir\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.660741 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1304679c-1853-474c-9796-e64e919305dd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.660788 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f377d71c-c91f-4a27-8276-7e06263de9f6-mcd-auth-proxy-config\") pod \"machine-config-daemon-njl5v\" (UID: \"f377d71c-c91f-4a27-8276-7e06263de9f6\") " pod="openshift-machine-config-operator/machine-config-daemon-njl5v" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.660863 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-os-release\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.660886 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4ae5be4c-02db-4fcd-81dc-a86584c36ef5-hosts-file\") pod \"node-resolver-54hgz\" (UID: \"4ae5be4c-02db-4fcd-81dc-a86584c36ef5\") " pod="openshift-dns/node-resolver-54hgz" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.660961 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1304679c-1853-474c-9796-e64e919305dd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.661121 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c974595e-d4c8-4c12-975a-2adb13a4c399-serviceca\") pod \"node-ca-7vbfj\" (UID: \"c974595e-d4c8-4c12-975a-2adb13a4c399\") " pod="openshift-image-registry/node-ca-7vbfj" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.661401 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.661550 5108 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1304679c-1853-474c-9796-e64e919305dd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.661703 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f377d71c-c91f-4a27-8276-7e06263de9f6-proxy-tls\") pod \"machine-config-daemon-njl5v\" (UID: \"f377d71c-c91f-4a27-8276-7e06263de9f6\") " pod="openshift-machine-config-operator/machine-config-daemon-njl5v" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.661579 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.661812 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-cnibin\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.661933 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-multus-socket-dir-parent\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.662068 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gwbvp\" (UniqueName: \"kubernetes.io/projected/1304679c-1853-474c-9796-e64e919305dd-kube-api-access-gwbvp\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.662191 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-system-cni-dir\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.662334 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xgrq\" (UniqueName: \"kubernetes.io/projected/f377d71c-c91f-4a27-8276-7e06263de9f6-kube-api-access-7xgrq\") pod \"machine-config-daemon-njl5v\" (UID: \"f377d71c-c91f-4a27-8276-7e06263de9f6\") " pod="openshift-machine-config-operator/machine-config-daemon-njl5v" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.662445 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj6xp\" (UniqueName: \"kubernetes.io/projected/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-kube-api-access-nj6xp\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.662552 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1304679c-1853-474c-9796-e64e919305dd-cnibin\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 
00:11:54.662658 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-cni-binary-copy\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.662771 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-var-lib-cni-multus\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.662843 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1304679c-1853-474c-9796-e64e919305dd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.662661 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1304679c-1853-474c-9796-e64e919305dd-cnibin\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.662885 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4ae5be4c-02db-4fcd-81dc-a86584c36ef5-tmp-dir\") pod \"node-resolver-54hgz\" (UID: \"4ae5be4c-02db-4fcd-81dc-a86584c36ef5\") " pod="openshift-dns/node-resolver-54hgz" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.662952 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z79d4\" (UniqueName: \"kubernetes.io/projected/4ae5be4c-02db-4fcd-81dc-a86584c36ef5-kube-api-access-z79d4\") pod \"node-resolver-54hgz\" (UID: \"4ae5be4c-02db-4fcd-81dc-a86584c36ef5\") " pod="openshift-dns/node-resolver-54hgz" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.663040 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1304679c-1853-474c-9796-e64e919305dd-cni-binary-copy\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.663104 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-run-netns\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.663135 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-var-lib-cni-bin\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.663251 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-etc-kubernetes\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.663281 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-hostroot\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.663340 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c974595e-d4c8-4c12-975a-2adb13a4c399-host\") pod \"node-ca-7vbfj\" (UID: \"c974595e-d4c8-4c12-975a-2adb13a4c399\") " pod="openshift-image-registry/node-ca-7vbfj" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.663403 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-run-k8s-cni-cncf-io\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.663502 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1304679c-1853-474c-9796-e64e919305dd-os-release\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.663676 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs\") pod \"network-metrics-daemon-mlfqf\" (UID: \"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\") " pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.663802 5108 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1304679c-1853-474c-9796-e64e919305dd-os-release\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.663899 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.663564 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1304679c-1853-474c-9796-e64e919305dd-cni-binary-copy\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.663813 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.664136 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f377d71c-c91f-4a27-8276-7e06263de9f6-rootfs\") pod \"machine-config-daemon-njl5v\" (UID: \"f377d71c-c91f-4a27-8276-7e06263de9f6\") " pod="openshift-machine-config-operator/machine-config-daemon-njl5v" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.664263 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-run-multus-certs\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.664434 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zntvl\" (UniqueName: \"kubernetes.io/projected/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-kube-api-access-zntvl\") pod \"network-metrics-daemon-mlfqf\" (UID: \"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\") " pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.664551 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-multus-conf-dir\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.664646 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x7fv\" (UniqueName: \"kubernetes.io/projected/c974595e-d4c8-4c12-975a-2adb13a4c399-kube-api-access-6x7fv\") pod \"node-ca-7vbfj\" (UID: \"c974595e-d4c8-4c12-975a-2adb13a4c399\") " pod="openshift-image-registry/node-ca-7vbfj" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.664768 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/1304679c-1853-474c-9796-e64e919305dd-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc 
kubenswrapper[5108]: I0104 00:11:54.664881 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-multus-daemon-config\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.666605 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/1304679c-1853-474c-9796-e64e919305dd-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.673265 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732792c7-3389-4b84-88bd-7207a86bf590\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9b4eb4e10456fad30e3a03344ec2affe56bf2b509b098d5b2b3e0d405875b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4871dd57f0ecd21f2d7f2b64f2493a0612dd77b89b0feeff7852b3ea1421b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://255667ec678133d539daab501a5b98a62289ce5d0229da32b3582e57ad5a5c40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28b887516a54da7ea3f035c2831e5d2ceef4487d4328fb87020325e4818d991f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.681186 5108 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.682338 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwbvp\" (UniqueName: \"kubernetes.io/projected/1304679c-1853-474c-9796-e64e919305dd-kube-api-access-gwbvp\") pod \"multus-additional-cni-plugins-7kzr9\" (UID: \"1304679c-1853-474c-9796-e64e919305dd\") " pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.689468 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.697763 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.699417 5108 scope.go:117] "RemoveContainer" containerID="001488f02f298ecdbad61e43398fbbe845d04526ab076c51dc377df80bfbc40e" Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.699743 5108 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.704396 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.718480 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-rzs5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rzs5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.722218 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.730835 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.732140 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2v7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2v7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-d8pjz\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.736875 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.736926 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.736938 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.736958 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.736968 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:54Z","lastTransitionTime":"2026-01-04T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:54 crc kubenswrapper[5108]: W0104 00:11:54.749744 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-20359d6663004a95829fc90198e2e38faa3d30259e39aa56a71604a4a120bbf1 WatchSource:0}: Error finding container 20359d6663004a95829fc90198e2e38faa3d30259e39aa56a71604a4a120bbf1: Status 404 returned error can't find the container with id 20359d6663004a95829fc90198e2e38faa3d30259e39aa56a71604a4a120bbf1 Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.755800 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6220c537-1e01-468c-ade3-4489ff45c4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://5faa5d936dcf21f3645dc93fead84972db7b350c39f1ae1f4ba5ddb7af9d0f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:31Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://5600c53dc483245092b5d86d14ce5cd512c39f5cde0f47f32ba2d68c92d05cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:31Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPa
th\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9eb8b844800fe1d272ec5c719cd0db94d9da63d845e436f1afbafda9fcf5c3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:31Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2ea94b55e12c0f25dcd9c205306a29a282c096d4bbf535c91a6b5cc419be53f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:31Z\\\"}},\\\"user\\\":{\\\"linux\\\"
:{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4770a34a9314b95470ad00e2ab4b5d3dc56c2a21e54866222ebe78dcd2f04ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d98857ef4501aaef6030f0f846b91a14f15880222b497d8721b729a811f9cc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95
ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98857ef4501aaef6030f0f846b91a14f15880222b497d8721b729a811f9cc0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d59b5133e349e7e5d7b721998724542bfa25fd017309a83749abbe4f38790799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d59b5133e349e7e5d7b721998724542bfa25fd017309a83749abbe4f38790799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"u
id\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://33de743a58f4b3abac7e4ee060e48ec3b0d12948e982e7d543847f2234fad921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33de743a58f4b3abac7e4ee060e48ec3b0d12948e982e7d543847f2234fad921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.765438 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.765504 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.765530 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.765560 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.766193 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.766303 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: 
\"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.766351 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.766453 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.766502 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.766537 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.766647 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.766684 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.766761 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.766769 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.766842 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.766905 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.766953 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.766934 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767076 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767298 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767324 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767339 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767366 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767403 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767561 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:11:54 crc kubenswrapper[5108]: 
I0104 00:11:54.767603 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767633 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767656 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767681 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767701 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767725 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod 
\"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767749 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767774 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767804 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767831 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767861 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767886 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767883 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.767914 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768029 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768047 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768067 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod 
\"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768043 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768098 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768119 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768141 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768140 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768705 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768756 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768786 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768825 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768857 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768886 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768920 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768958 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.768991 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.769028 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.769059 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod 
\"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.769089 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.769400 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.769697 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.769901 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.769947 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.769965 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.770139 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.770254 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.770366 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.770402 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.770538 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.770708 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.770744 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.770909 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.771427 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.771666 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.771683 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.771942 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.772278 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.772301 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.772321 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.772373 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.772503 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.772714 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.772796 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.773081 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.773168 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.773354 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.773367 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.773427 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.773623 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.773676 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.773783 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.773840 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:11:55.273811276 +0000 UTC m=+89.262376362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.773917 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774000 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774033 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774075 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774329 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774389 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774426 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774462 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774502 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774535 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774423 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774560 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774536 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774589 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774615 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774639 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774662 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774690 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774699 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774716 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774805 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.774854 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775387 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775433 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775484 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775524 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775552 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775588 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775621 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775656 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775684 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775716 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775748 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775778 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775814 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775860 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775897 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775925 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775964 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775996 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.776028 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.776067 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.776105 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.776145 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.776177 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.776228 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.776342 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.776371 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775116 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775361 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775367 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775500 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775782 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775795 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.775993 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.776874 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.776850 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777114 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777153 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777121 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777332 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777396 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777400 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777432 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777458 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777576 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777586 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777615 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777680 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777731 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777863 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777912 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777922 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777940 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.777953 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778076 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778094 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778128 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778193 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778252 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778245 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778286 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778315 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778339 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778357 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778367 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.776106 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.776346 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.776590 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.776591 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.776602 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778506 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778707 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778869 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778903 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778996 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779154 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779196 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779219 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779480 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.778414 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779510 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779547 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779577 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779648 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779675 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779697 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779717 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779736 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779760 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779780 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779803 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779851 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779876 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779898 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779920 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779942 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779977 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780001 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780024 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780051 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780078 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780099 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780120 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780141 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780160 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780182 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 
00:11:54.780213 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780233 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780251 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780273 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780299 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780319 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod 
\"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780343 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780366 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780390 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780434 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780483 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.782275 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.782918 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.782967 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783008 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783035 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783067 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") 
" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783100 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783126 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783151 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783171 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783211 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783238 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") 
pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783258 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783279 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779727 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783311 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779784 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.779907 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780050 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780279 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780367 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780432 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780627 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.780854 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.781231 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.781290 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.781391 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.781491 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.781732 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.781759 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.781819 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.781969 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.782341 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.782351 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.784284 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.782391 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.782695 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.782712 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783040 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783106 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783116 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.784396 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783142 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783296 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.784187 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.784523 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783329 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.784683 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.784703 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.784799 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.784833 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.784843 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.784870 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.784903 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.784928 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.784955 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.784979 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785008 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785008 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785033 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785057 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785082 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785097 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785111 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785190 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785234 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785263 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785264 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785285 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785361 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785393 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785400 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785401 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785425 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785518 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785564 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785791 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785826 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.785895 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.783752 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.786017 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.786123 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.786310 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.786441 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.786505 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.786813 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.786905 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787107 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787180 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787217 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787232 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787282 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787308 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 04 00:11:54 crc 
kubenswrapper[5108]: I0104 00:11:54.787315 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787335 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787359 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787383 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787453 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 04 00:11:54 crc 
kubenswrapper[5108]: I0104 00:11:54.787477 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787496 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787517 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787539 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787554 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787565 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787583 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787599 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787623 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787643 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787667 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787691 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787717 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787734 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787756 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787769 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787782 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787803 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787825 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787848 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787880 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787901 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787924 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787925 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787949 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787967 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787973 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.787992 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788045 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788070 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788086 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788120 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788144 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788169 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788282 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788307 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788362 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788363 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788390 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788418 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788409 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788435 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788450 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788507 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788536 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788637 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-multus-cni-dir\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788666 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-var-lib-kubelet\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788700 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-slash\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788721 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph7rp\" (UniqueName: \"kubernetes.io/projected/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-kube-api-access-ph7rp\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788774 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-run-netns\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788803 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-var-lib-openvswitch\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788827 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-log-socket\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788862 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f377d71c-c91f-4a27-8276-7e06263de9f6-mcd-auth-proxy-config\") pod \"machine-config-daemon-njl5v\" (UID: \"f377d71c-c91f-4a27-8276-7e06263de9f6\") " pod="openshift-machine-config-operator/machine-config-daemon-njl5v"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788885 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-os-release\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788907 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4ae5be4c-02db-4fcd-81dc-a86584c36ef5-hosts-file\") pod \"node-resolver-54hgz\" (UID: \"4ae5be4c-02db-4fcd-81dc-a86584c36ef5\") " pod="openshift-dns/node-resolver-54hgz"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788928 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c974595e-d4c8-4c12-975a-2adb13a4c399-serviceca\") pod \"node-ca-7vbfj\" (UID: \"c974595e-d4c8-4c12-975a-2adb13a4c399\") " pod="openshift-image-registry/node-ca-7vbfj"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788947 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-ovn\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788972 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-d8pjz\" (UID: \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788995 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-d8pjz\" (UID: \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789016 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2v7q\" (UniqueName: \"kubernetes.io/projected/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-kube-api-access-z2v7q\") pod \"ovnkube-control-plane-57b78d8988-d8pjz\" (UID: \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789042 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f377d71c-c91f-4a27-8276-7e06263de9f6-proxy-tls\") pod \"machine-config-daemon-njl5v\" (UID: \"f377d71c-c91f-4a27-8276-7e06263de9f6\") " pod="openshift-machine-config-operator/machine-config-daemon-njl5v"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789062 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-cnibin\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789091 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-multus-socket-dir-parent\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789120 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-env-overrides\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789180 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-system-cni-dir\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789226 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovn-node-metrics-cert\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789037 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789258 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789255 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7xgrq\" (UniqueName: \"kubernetes.io/projected/f377d71c-c91f-4a27-8276-7e06263de9f6-kube-api-access-7xgrq\") pod \"machine-config-daemon-njl5v\" (UID: \"f377d71c-c91f-4a27-8276-7e06263de9f6\") " pod="openshift-machine-config-operator/machine-config-daemon-njl5v"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789357 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nj6xp\" (UniqueName: \"kubernetes.io/projected/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-kube-api-access-nj6xp\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789391 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.788086 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789449 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-cni-bin\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789446 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789839 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789559 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.789874 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.790106 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.790259 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.790396 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.790462 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.790587 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.790647 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-cni-netd\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.790736 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-cni-binary-copy\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.790769 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-var-lib-cni-multus\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.790833 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4ae5be4c-02db-4fcd-81dc-a86584c36ef5-tmp-dir\") pod \"node-resolver-54hgz\" (UID: \"4ae5be4c-02db-4fcd-81dc-a86584c36ef5\") " pod="openshift-dns/node-resolver-54hgz"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.790863 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z79d4\" (UniqueName: \"kubernetes.io/projected/4ae5be4c-02db-4fcd-81dc-a86584c36ef5-kube-api-access-z79d4\") pod \"node-resolver-54hgz\" (UID: \"4ae5be4c-02db-4fcd-81dc-a86584c36ef5\") " pod="openshift-dns/node-resolver-54hgz"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.790895 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-kubelet\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.791191 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-run-netns\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.792008 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-var-lib-cni-bin\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.790663 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.792105 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.790672 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.790802 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.791013 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.791898 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.792897 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.792919 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.793216 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.793335 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.793587 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.790107 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.793612 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.793633 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.793997 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.794231 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.794266 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.794321 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.794561 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.796246 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.796457 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.796843 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.796866 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.796874 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.796993 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.797055 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.797087 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.797657 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.797980 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.798046 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.797994 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.798296 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.798338 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-run-netns\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.798454 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-multus-cni-dir\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.798680 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-etc-kubernetes\") pod \"multus-rzs5n\" (UID: 
\"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.798739 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-openvswitch\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.798771 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-node-log\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.798799 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovnkube-script-lib\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.798831 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-hostroot\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.798829 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-multus-socket-dir-parent\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " 
pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.798878 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-system-cni-dir\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.798422 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.798639 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.798704 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.799340 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.799650 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800010 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c974595e-d4c8-4c12-975a-2adb13a4c399-host\") pod \"node-ca-7vbfj\" (UID: \"c974595e-d4c8-4c12-975a-2adb13a4c399\") " pod="openshift-image-registry/node-ca-7vbfj" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800058 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-etc-openvswitch\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800087 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-d8pjz\" (UID: \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800150 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-run-k8s-cni-cncf-io\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800176 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-systemd-units\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800242 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-etc-kubernetes\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800260 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-var-lib-cni-bin\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800285 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c974595e-d4c8-4c12-975a-2adb13a4c399-host\") pod 
\"node-ca-7vbfj\" (UID: \"c974595e-d4c8-4c12-975a-2adb13a4c399\") " pod="openshift-image-registry/node-ca-7vbfj" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800311 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-run-k8s-cni-cncf-io\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800336 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800376 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs\") pod \"network-metrics-daemon-mlfqf\" (UID: \"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\") " pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800376 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-var-lib-cni-multus\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800408 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-hostroot\") pod \"multus-rzs5n\" (UID: 
\"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800427 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800508 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-os-release\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800505 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovnkube-config\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800573 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800580 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.800582 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800571 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4ae5be4c-02db-4fcd-81dc-a86584c36ef5-hosts-file\") pod \"node-resolver-54hgz\" (UID: \"4ae5be4c-02db-4fcd-81dc-a86584c36ef5\") " pod="openshift-dns/node-resolver-54hgz" Jan 04 00:11:54 crc kubenswrapper[5108]: E0104 00:11:54.800678 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs podName:6feab616-6edc-4a90-8ee9-f5ae1c2e80c5 nodeName:}" failed. No retries permitted until 2026-01-04 00:11:55.300655666 +0000 UTC m=+89.289220752 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs") pod "network-metrics-daemon-mlfqf" (UID: "6feab616-6edc-4a90-8ee9-f5ae1c2e80c5") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800672 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f377d71c-c91f-4a27-8276-7e06263de9f6-mcd-auth-proxy-config\") pod \"machine-config-daemon-njl5v\" (UID: \"f377d71c-c91f-4a27-8276-7e06263de9f6\") " pod="openshift-machine-config-operator/machine-config-daemon-njl5v" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800720 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-cnibin\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800758 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f377d71c-c91f-4a27-8276-7e06263de9f6-rootfs\") pod \"machine-config-daemon-njl5v\" (UID: \"f377d71c-c91f-4a27-8276-7e06263de9f6\") " pod="openshift-machine-config-operator/machine-config-daemon-njl5v" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800804 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-run-multus-certs\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800853 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-run-multus-certs\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800868 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.800926 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f377d71c-c91f-4a27-8276-7e06263de9f6-rootfs\") pod \"machine-config-daemon-njl5v\" (UID: \"f377d71c-c91f-4a27-8276-7e06263de9f6\") " pod="openshift-machine-config-operator/machine-config-daemon-njl5v"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801049 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-host-var-lib-kubelet\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801084 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zntvl\" (UniqueName: \"kubernetes.io/projected/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-kube-api-access-zntvl\") pod \"network-metrics-daemon-mlfqf\" (UID: \"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\") " pod="openshift-multus/network-metrics-daemon-mlfqf"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801133 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-multus-conf-dir\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801182 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-multus-conf-dir\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801182 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6x7fv\" (UniqueName: \"kubernetes.io/projected/c974595e-d4c8-4c12-975a-2adb13a4c399-kube-api-access-6x7fv\") pod \"node-ca-7vbfj\" (UID: \"c974595e-d4c8-4c12-975a-2adb13a4c399\") " pod="openshift-image-registry/node-ca-7vbfj"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801354 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-multus-daemon-config\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801391 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-systemd\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801705 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801728 5108 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801738 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801748 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801758 5108 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801770 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801780 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801791 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801801 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801814 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801828 5108 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801841 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801850 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801859 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801869 5108 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801881 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801892 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801904 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801916 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801930 5108 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801943 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801956 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801969 5108 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801980 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.801991 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802000 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802012 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802023 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802034 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802045 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802055 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802064 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802075 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802084 5108 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802093 5108 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802101 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802112 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802122 5108 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802130 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802141 5108 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802151 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802160 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802171 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802183 5108 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802211 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802224 5108 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802235 5108 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802269 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802279 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802288 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802297 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802306 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802316 5108 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802324 5108 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802335 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802345 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802354 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802363 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802373 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802384 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802399 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802412 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802423 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802434 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802443 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802453 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802463 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802473 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802482 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802478 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c974595e-d4c8-4c12-975a-2adb13a4c399-serviceca\") pod \"node-ca-7vbfj\" (UID: \"c974595e-d4c8-4c12-975a-2adb13a4c399\") " pod="openshift-image-registry/node-ca-7vbfj"
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802491 5108 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802553 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802565 5108 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802578 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802591 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802604 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802621 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802637 5108 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802649 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802662 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802679 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802692 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802706 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802717 5108 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802730 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802741 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802756 5108 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802769 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802780 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802795 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802808 5108 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802820 5108 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802834 5108 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802848 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802857 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802866 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802878 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802892 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802907 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802922 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802933 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802944 5108 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802957 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802968 5108 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802979 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.802991 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803002 5108 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803014 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803027 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803039 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803049 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803060 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803069 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803079 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803090 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803099 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803109 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803119 5108 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803128 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803137 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803146 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803155 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803164 5108 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803173 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803183 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803193 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803229 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803241 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803250 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803259 5108 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803270 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803282 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803294 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803308 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803323 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803339 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803350 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803363 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803380 5108 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\""
Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803398 5108 reconciler_common.go:299] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803411 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803424 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803466 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803494 5108 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803523 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803539 5108 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803638 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: 
\"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803678 5108 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803725 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803791 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.803833 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804019 5108 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804032 5108 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804043 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804054 5108 reconciler_common.go:299] "Volume detached for 
volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804063 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804073 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804084 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804094 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804103 5108 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804112 5108 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804123 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: 
\"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804132 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804142 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804151 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804159 5108 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804168 5108 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804178 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804187 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node 
\"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804228 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804239 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804248 5108 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804259 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804276 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804396 5108 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804411 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804423 5108 reconciler_common.go:299] 
"Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804435 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804443 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804574 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804586 5108 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804596 5108 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804606 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804616 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804627 5108 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804637 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804647 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804656 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804665 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804674 5108 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804682 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 
04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804691 5108 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804701 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804709 5108 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804717 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804725 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804734 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.804743 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.805685 5108 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.806366 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-multus-daemon-config\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.807748 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-cni-binary-copy\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.810712 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod 
"c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.810838 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.813524 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4ae5be4c-02db-4fcd-81dc-a86584c36ef5-tmp-dir\") pod \"node-resolver-54hgz\" (UID: \"4ae5be4c-02db-4fcd-81dc-a86584c36ef5\") " pod="openshift-dns/node-resolver-54hgz" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.813864 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.815727 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.817651 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xgrq\" (UniqueName: \"kubernetes.io/projected/f377d71c-c91f-4a27-8276-7e06263de9f6-kube-api-access-7xgrq\") pod \"machine-config-daemon-njl5v\" (UID: \"f377d71c-c91f-4a27-8276-7e06263de9f6\") " pod="openshift-machine-config-operator/machine-config-daemon-njl5v" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.818277 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj6xp\" (UniqueName: \"kubernetes.io/projected/8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23-kube-api-access-nj6xp\") pod \"multus-rzs5n\" (UID: \"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\") " pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: W0104 00:11:54.818322 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-47d5a203b581efb93396e8030f5bd7c46c71b58be3c47c5ce45342d4f04675b2 WatchSource:0}: Error finding container 47d5a203b581efb93396e8030f5bd7c46c71b58be3c47c5ce45342d4f04675b2: Status 404 returned error can't find the container with id 47d5a203b581efb93396e8030f5bd7c46c71b58be3c47c5ce45342d4f04675b2 Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.819067 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.819295 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.819418 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.819566 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.820097 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.820137 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.821115 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.822176 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x7fv\" (UniqueName: \"kubernetes.io/projected/c974595e-d4c8-4c12-975a-2adb13a4c399-kube-api-access-6x7fv\") pod \"node-ca-7vbfj\" (UID: \"c974595e-d4c8-4c12-975a-2adb13a4c399\") " pod="openshift-image-registry/node-ca-7vbfj" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.822119 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1304679c-1853-474c-9796-e64e919305dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7kzr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.822685 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.822964 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.823244 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7vbfj" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.824243 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.824720 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f377d71c-c91f-4a27-8276-7e06263de9f6-proxy-tls\") pod \"machine-config-daemon-njl5v\" (UID: \"f377d71c-c91f-4a27-8276-7e06263de9f6\") " pod="openshift-machine-config-operator/machine-config-daemon-njl5v" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.827157 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.827790 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.827800 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.827841 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.828190 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.828242 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z79d4\" (UniqueName: \"kubernetes.io/projected/4ae5be4c-02db-4fcd-81dc-a86584c36ef5-kube-api-access-z79d4\") pod \"node-resolver-54hgz\" (UID: \"4ae5be4c-02db-4fcd-81dc-a86584c36ef5\") " pod="openshift-dns/node-resolver-54hgz" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.830166 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zntvl\" (UniqueName: \"kubernetes.io/projected/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-kube-api-access-zntvl\") pod \"network-metrics-daemon-mlfqf\" (UID: \"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\") " pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.838867 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.841616 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.841673 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.841687 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.841711 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.841727 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:54Z","lastTransitionTime":"2026-01-04T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.845385 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nhl4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.849972 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.859529 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e8ada1-12ff-4db8-92fa-aad0b162537b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b46597ace50f1479ce247dd96257545e8ebd89d91ea8d25b96566c802bc5770c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8b
acc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a126cd2de771b57582f22e51d037cc93cb4afd7c3d6afe7fce9b37e4386a8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"
cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2ba930f65c545366818d27dc41669dd09c8c81630dec8b3a9870c1bd42387201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a4534c19318d79d45b4218830f651d1cd0121733d43f00126b77092e620dbf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a4534c19318d79d45b4
218830f651d1cd0121733d43f00126b77092e620dbf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.869171 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.870462 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-54hgz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae5be4c-02db-4fcd-81dc-a86584c36ef5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z79d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54hgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.873190 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"47d5a203b581efb93396e8030f5bd7c46c71b58be3c47c5ce45342d4f04675b2"} Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.874094 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" 
event={"ID":"1304679c-1853-474c-9796-e64e919305dd","Type":"ContainerStarted","Data":"5b7c517b951dd4f3227c382b5cf8f46e24e6ca87a6f87c30f54fa026c62578ad"} Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.874865 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7vbfj" event={"ID":"c974595e-d4c8-4c12-975a-2adb13a4c399","Type":"ContainerStarted","Data":"17dc499b25267c80278853a37dbd60dadd988575cd7a5b9680436dc980628f68"} Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.876078 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"20359d6663004a95829fc90198e2e38faa3d30259e39aa56a71604a4a120bbf1"} Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.876972 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"3b184b053542f4eb72807f7d487de8d586a73794720ca7c2bbfd1c86e1db5403"} Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.882691 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7vbfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c974595e-d4c8-4c12-975a-2adb13a4c399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6x7fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7vbfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.893871 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c9cf99-7a0b-4178-aa66-b771307149c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://527b9b2ed8353f00600d3385d2dd27e109b87532fe919428fc3fcd303846c1f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2efcdf49d4b3f8088542db77988c0d89b0543858ed507ac440a68dbdf5705732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2efcdf49d4b3f8088542db77988c0d89b0543858ed507ac440a68dbdf5705732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.905790 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.905885 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-systemd\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.905898 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.905915 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-slash\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.905941 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ph7rp\" (UniqueName: \"kubernetes.io/projected/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-kube-api-access-ph7rp\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.905974 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-systemd\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.906015 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-run-netns\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.906084 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-var-lib-openvswitch\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.906104 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-log-socket\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.906123 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-ovn\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.906130 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-run-netns\") pod 
\"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.906148 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-d8pjz\" (UID: \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.906167 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-d8pjz\" (UID: \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.906180 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-log-socket\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.906188 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z2v7q\" (UniqueName: \"kubernetes.io/projected/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-kube-api-access-z2v7q\") pod \"ovnkube-control-plane-57b78d8988-d8pjz\" (UID: \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.906231 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-env-overrides\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.906180 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-var-lib-openvswitch\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.906241 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-ovn\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.906010 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-slash\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.906399 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovn-node-metrics-cert\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907146 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907173 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-d8pjz\" (UID: \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907247 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-env-overrides\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907276 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-cni-bin\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907252 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907305 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-cni-bin\") pod \"ovnkube-node-nhl4w\" (UID: 
\"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907368 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-cni-netd\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907396 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-cni-netd\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907451 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-kubelet\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907484 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-openvswitch\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907509 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-node-log\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 
00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907535 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovnkube-script-lib\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907563 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-etc-openvswitch\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907590 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-d8pjz\" (UID: \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907619 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-systemd-units\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907645 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-openvswitch\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: 
I0104 00:11:54.907687 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-kubelet\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907733 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-node-log\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907767 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-etc-openvswitch\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.907969 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-systemd-units\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.908041 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovnkube-config\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.908094 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.908489 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovnkube-script-lib\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.909366 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovnkube-config\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.909564 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-d8pjz\" (UID: \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.909606 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.909623 5108 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.909637 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.909654 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.909679 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.909692 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.909704 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.909717 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.909730 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.910224 5108 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc 
kubenswrapper[5108]: I0104 00:11:54.910238 5108 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.910253 5108 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.910265 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.910350 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.910380 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.910392 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.910405 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.910416 5108 reconciler_common.go:299] "Volume 
detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.910428 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.910439 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.910459 5108 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.910469 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.910479 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.910492 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.912660 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-d8pjz\" (UID: \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.912654 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.915380 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovn-node-metrics-cert\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.915675 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.923732 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.925149 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rzs5n" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.925893 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlfqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zntvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zntvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.928749 5108 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-ph7rp\" (UniqueName: \"kubernetes.io/projected/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-kube-api-access-ph7rp\") pod \"ovnkube-node-nhl4w\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.941012 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2v7q\" (UniqueName: \"kubernetes.io/projected/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-kube-api-access-z2v7q\") pod \"ovnkube-control-plane-57b78d8988-d8pjz\" (UID: \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.941482 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f377d71c-c91f-4a27-8276-7e06263de9f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xgrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xgrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-njl5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.941682 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.943855 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.943896 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.943909 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.943930 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:54 crc kubenswrapper[5108]: I0104 00:11:54.943944 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:54Z","lastTransitionTime":"2026-01-04T00:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:54 crc kubenswrapper[5108]: W0104 00:11:54.984174 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f4ef11a_e50f_4ed2_88f5_8cb0eef1af23.slice/crio-96526bb13555fbe4683607b656ba1a440abe821e6fc9e95defdf2813d1434fb4 WatchSource:0}: Error finding container 96526bb13555fbe4683607b656ba1a440abe821e6fc9e95defdf2813d1434fb4: Status 404 returned error can't find the container with id 96526bb13555fbe4683607b656ba1a440abe821e6fc9e95defdf2813d1434fb4 Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.011737 5108 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.048593 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.048678 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.048695 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.048719 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.048736 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:55Z","lastTransitionTime":"2026-01-04T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.113271 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.113335 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.113414 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-54hgz" Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.113500 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.113521 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.113573 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.113523 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.113635 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.113654 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.113665 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.113702 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.113506 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.113760 
5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.113913 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-04 00:11:56.113887421 +0000 UTC m=+90.102452507 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.113978 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-04 00:11:56.113941213 +0000 UTC m=+90.102506309 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.114018 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-01-04 00:11:56.114006645 +0000 UTC m=+90.102571941 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.114217 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-04 00:11:56.114184369 +0000 UTC m=+90.102749455 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:11:55 crc kubenswrapper[5108]: W0104 00:11:55.139234 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ae5be4c_02db_4fcd_81dc_a86584c36ef5.slice/crio-2ec038c16db261e190a111fef4148e40ca0c66d5acb3a76b89d215126976d484 WatchSource:0}: Error finding container 2ec038c16db261e190a111fef4148e40ca0c66d5acb3a76b89d215126976d484: Status 404 returned error can't find the container with id 2ec038c16db261e190a111fef4148e40ca0c66d5acb3a76b89d215126976d484 Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.157736 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.157784 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.157794 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.157811 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.157824 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:55Z","lastTransitionTime":"2026-01-04T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.234065 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.271692 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.271756 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.271766 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.271785 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.271799 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:55Z","lastTransitionTime":"2026-01-04T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.290553 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.290627 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.290642 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.290664 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.290678 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:55Z","lastTransitionTime":"2026-01-04T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.312848 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d5d783a5-a674-4781-98e0-72a073e00d58\\\",\\\"systemUUID\\\":\\\"b32cf431-599e-4ef4-b60f-ec5735cef856\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.317018 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.317315 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:11:56.317270241 +0000 UTC m=+90.305835327 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.317474 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs\") pod \"network-metrics-daemon-mlfqf\" (UID: \"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\") " pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.317538 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.317593 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.317622 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.317638 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.317712 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs podName:6feab616-6edc-4a90-8ee9-f5ae1c2e80c5 nodeName:}" failed. No retries permitted until 2026-01-04 00:11:56.317689472 +0000 UTC m=+90.306254558 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs") pod "network-metrics-daemon-mlfqf" (UID: "6feab616-6edc-4a90-8ee9-f5ae1c2e80c5") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.317647 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.317747 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:55Z","lastTransitionTime":"2026-01-04T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.330595 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d5d783a5-a674-4781-98e0-72a073e00d58\\\",\\\"systemUUID\\\":\\\"b32cf431-599e-4ef4-b60f-ec5735cef856\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.336954 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.337005 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.337015 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.337034 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.337046 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:55Z","lastTransitionTime":"2026-01-04T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:55 crc kubenswrapper[5108]: W0104 00:11:55.364783 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c95d1a3_7d43_48b4_afe6_dd3bf3b87dc6.slice/crio-80a86e31c3e4fac2b225a746ba153cced16ff4d887b302f70c8da3431dee0c21 WatchSource:0}: Error finding container 80a86e31c3e4fac2b225a746ba153cced16ff4d887b302f70c8da3431dee0c21: Status 404 returned error can't find the container with id 80a86e31c3e4fac2b225a746ba153cced16ff4d887b302f70c8da3431dee0c21 Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.370459 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d5d783a5-a674-4781-98e0-72a073e00d58\\\",\\\"systemUUID\\\":\\\"b32cf431-599e-4ef4-b60f-ec5735cef856\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.379935 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.379988 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.380001 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.380024 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.380037 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:55Z","lastTransitionTime":"2026-01-04T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.391744 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d5d783a5-a674-4781-98e0-72a073e00d58\\\",\\\"systemUUID\\\":\\\"b32cf431-599e-4ef4-b60f-ec5735cef856\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.411825 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.411896 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.411920 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.411951 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.411968 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:55Z","lastTransitionTime":"2026-01-04T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.430441 5108 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d5d783a5-a674-4781-98e0-72a073e00d58\\\",\\\"systemUUID\\\":\\\"b32cf431-599e-4ef4-b60f-ec5735cef856\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:55 crc kubenswrapper[5108]: E0104 00:11:55.430586 5108 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.432528 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.432581 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.432597 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.432622 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.432637 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:55Z","lastTransitionTime":"2026-01-04T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.536000 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.536050 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.536061 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.536085 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.536101 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:55Z","lastTransitionTime":"2026-01-04T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.638671 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.639290 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.639312 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.639334 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.639346 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:55Z","lastTransitionTime":"2026-01-04T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.741999 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.742065 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.742075 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.742096 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.742111 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:55Z","lastTransitionTime":"2026-01-04T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.844669 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.844731 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.844745 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.844767 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.844783 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:55Z","lastTransitionTime":"2026-01-04T00:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.900309 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7vbfj" event={"ID":"c974595e-d4c8-4c12-975a-2adb13a4c399","Type":"ContainerStarted","Data":"19cb153fdd72887c57559e18117a87798bb19ff0f5d8f78527d1f06fdfac9e88"} Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.908033 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-54hgz" event={"ID":"4ae5be4c-02db-4fcd-81dc-a86584c36ef5","Type":"ContainerStarted","Data":"f7b245ea007395f1e5ce0c2a5c198dec6ffc524d753b8a46dedbb88251ce88c6"} Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.908119 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-54hgz" event={"ID":"4ae5be4c-02db-4fcd-81dc-a86584c36ef5","Type":"ContainerStarted","Data":"2ec038c16db261e190a111fef4148e40ca0c66d5acb3a76b89d215126976d484"} Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.910993 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rzs5n" event={"ID":"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23","Type":"ContainerStarted","Data":"7992dd8c360b5fa59546180b8358b2f8950b3b1a60bddb47c4b085abd26fee5f"} Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.911056 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rzs5n" event={"ID":"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23","Type":"ContainerStarted","Data":"96526bb13555fbe4683607b656ba1a440abe821e6fc9e95defdf2813d1434fb4"} Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.917005 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" event={"ID":"1304679c-1853-474c-9796-e64e919305dd","Type":"ContainerStarted","Data":"b16279618f52b1b2a8e55afef485f23e6bc086f31f3c1e59680047893f657fbe"} Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.922978 5108 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.932968 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"cdcf52cacbc491094ef37159fd0f8c07c157589e5415f23c4ac8b78649bd47f2"} Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.933027 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"66c49dcd651576d83f3cec1e94594be05ce6eef4d8e7f85c16a8e7424e958d2a"} Jan 04 00:11:55 crc kubenswrapper[5108]: I0104 00:11:55.935286 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"5eb9f09184a1abd0e30a0470232ca9aa91b8e269818766d8b4fb8e5cb595ea58"} Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.021024 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.021560 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.021686 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.021797 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.021879 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:56Z","lastTransitionTime":"2026-01-04T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.026168 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" event={"ID":"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6","Type":"ContainerStarted","Data":"afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794"} Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.026278 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" event={"ID":"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6","Type":"ContainerStarted","Data":"80a86e31c3e4fac2b225a746ba153cced16ff4d887b302f70c8da3431dee0c21"} Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.026278 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.028783 5108 generic.go:358] "Generic (PLEG): container finished" podID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerID="f22b382b86afe25cecc9a71e51f0b968f7a21c0676745ebc7343989517586318" exitCode=0 Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.028879 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerDied","Data":"f22b382b86afe25cecc9a71e51f0b968f7a21c0676745ebc7343989517586318"} Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.028923 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerStarted","Data":"0d9c5a8de15df6caaa824945872985cdd809b7a69873fa20ec6d08fedd59af7e"} Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.046636 5108 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerStarted","Data":"224a19824a39bcd1811e5de78454eaa50abc9730908edc0cad1179670251e933"} Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.046752 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerStarted","Data":"94f4e2cbc916293b4e6676fb0b3fe4568b76f062b4ce243281ad611c1958954a"} Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.046774 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerStarted","Data":"14d46660fef2638a12ef6f465961978e080a282084969f2b3391f537c86fbf61"} Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.059779 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-rzs5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rzs5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.077185 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2v7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2v7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-d8pjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.113585 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6220c537-1e01-468c-ade3-4489ff45c4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://5faa5d936dcf21f3645dc93fead84972db7b350c39f1ae1f4ba5ddb7af9d0f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:31Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://5600c53dc483245092b5d86d14ce5cd512c39f5cde0f47f32ba2d68c92d05cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:31Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"
}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9eb8b844800fe1d272ec5c719cd0db94d9da63d845e436f1afbafda9fcf5c3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:31Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2ea94b55e12c0f25dcd9c205306a29a282c096d4bbf535c91a6b5cc419be53f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:31Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd
\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4770a34a9314b95470ad00e2ab4b5d3dc56c2a21e54866222ebe78dcd2f04ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d98857ef4501aaef6030f0f846b91a14f15880222b497d8721b729a811f9cc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee
2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98857ef4501aaef6030f0f846b91a14f15880222b497d8721b729a811f9cc0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d59b5133e349e7e5d7b721998724542bfa25fd017309a83749abbe4f38790799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d59b5133e349e7e5d7b721998724542bfa25fd017309a83749abbe4f38790799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri
-o://33de743a58f4b3abac7e4ee060e48ec3b0d12948e982e7d543847f2234fad921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33de743a58f4b3abac7e4ee060e48ec3b0d12948e982e7d543847f2234fad921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.141736 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod 
\"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.143353 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.141830 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.143408 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.143563 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.143612 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.143627 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod 
openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.143683 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-04 00:11:58.143598586 +0000 UTC m=+92.132163682 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.143724 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-04 00:11:58.143710458 +0000 UTC m=+92.132275544 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.142604 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.144044 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.144059 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.144184 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.144232 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.144247 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:56Z","lastTransitionTime":"2026-01-04T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.144537 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.144559 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.144567 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.144618 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-04 00:11:58.144590113 +0000 UTC m=+92.133155199 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.145001 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.145160 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-04 00:11:58.145133017 +0000 UTC m=+92.133698103 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.161705 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.177797 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.191697 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.209820 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1304679c-1853-474c-9796-e64e919305dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7kzr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.236283 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nhl4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.247241 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.247300 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.247311 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.247332 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.247347 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:56Z","lastTransitionTime":"2026-01-04T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.250582 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e8ada1-12ff-4db8-92fa-aad0b162537b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b46597ace50f1479ce247dd96257545e8ebd89d91ea8d25b96566c802bc5770c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a126cd2de771b57582f22e51d037cc93cb4afd7c3d6afe7fce9b37e4386a8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2ba930f65c545366818d27dc41669dd09c8c81630dec8b3a9870c1bd42387201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a4534c19318d79d45b4218830f651d1cd0121733d43f00126b77092e620dbf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a4534c19318d79d45b4218830f651d1cd0121733d43f00126b77092e620dbf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.270676 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1581284b-5ee5-493b-8401-025c4348876e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://7b7d5d310358a9b842de277978eebe04b3dd67697935a4e7331293c8f2ce2c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cf77409fe9a2a06b6cee539ab960b8ffe727a07751479e7c45e6314efc896193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b50433c05b4e9462bc1aeb26ab699177676176c7912e3f3701262c4c809e3cc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://001488f02f
298ecdbad61e43398fbbe845d04526ab076c51dc377df80bfbc40e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://001488f02f298ecdbad61e43398fbbe845d04526ab076c51dc377df80bfbc40e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-04T00:11:37Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0104 00:11:37.205303 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0104 00:11:37.205595 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0104 00:11:37.206610 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-496936953/tls.crt::/tmp/serving-cert-496936953/tls.key\\\\\\\"\\\\nI0104 00:11:37.611539 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0104 00:11:37.615476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0104 00:11:37.615524 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0104 00:11:37.615568 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0104 00:11:37.615575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0104 00:11:37.620021 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0104 00:11:37.620062 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0104 00:11:37.620066 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0104 00:11:37.620070 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0104 00:11:37.620076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0104 00:11:37.620079 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0104 00:11:37.620081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0104 00:11:37.620229 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0104 00:11:37.623244 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-04T00:11:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2d7a38395218096d15fda6992626e039e078f2bec25e625392f1b72f1fc46dcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://76bbcaf7c19eae97cabab72b1af9ee18fd88354943af8dab060b9ab39179242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76bbcaf7c19eae97cabab72b1af9ee18fd88354943af8dab060b9ab39179242a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.283034 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-54hgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae5be4c-02db-4fcd-81dc-a86584c36ef5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z79d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54hgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.295745 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7vbfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c974595e-d4c8-4c12-975a-2adb13a4c399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://19cb153fdd72887c57559e18117a87798bb19ff0f5d8f78527d1f06fdfac9e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6x7fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7vbfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.312377 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c9cf99-7a0b-4178-aa66-b771307149c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://527b9b2ed8353f00600d3385d2dd27e109b87532fe919428fc3fcd303846c1f2\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2efcdf49d4b3f8088542db77988c0d89b0543858ed507ac440a68dbdf5705732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2efcdf49d4b3f8088542db77988c0d89b0543858ed507ac440a68dbdf5705732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.324917 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.333135 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlfqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zntvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zntvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.348147 5108 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-njl5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f377d71c-c91f-4a27-8276-7e06263de9f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xgrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xgrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-njl5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.348337 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.348552 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:11:58.348506636 +0000 UTC m=+92.337071722 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.348684 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs\") pod \"network-metrics-daemon-mlfqf\" (UID: \"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\") " pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.348831 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.348904 
5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs podName:6feab616-6edc-4a90-8ee9-f5ae1c2e80c5 nodeName:}" failed. No retries permitted until 2026-01-04 00:11:58.348882397 +0000 UTC m=+92.337447663 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs") pod "network-metrics-daemon-mlfqf" (UID: "6feab616-6edc-4a90-8ee9-f5ae1c2e80c5") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.349358 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.349397 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.349409 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.349430 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.349452 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:56Z","lastTransitionTime":"2026-01-04T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.364770 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"732792c7-3389-4b84-88bd-7207a86bf590\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9b4eb4e10456fad30e3a03344ec2affe56bf2b509b098d5b2b3e0d405875b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4871dd57f0ecd21f2d7f2b64f2493a0612dd77b89b0feeff7852b3ea1421b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://255667ec678133d539daab501a5b98a62289ce5d0229da32b3582e57ad5a5c40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde72610
9a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28b887516a54da7ea3f035c2831e5d2ceef4487d4328fb87020325e4818d991f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.379492 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"732792c7-3389-4b84-88bd-7207a86bf590\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9b4eb4e10456fad30e3a03344ec2affe56bf2b509b098d5b2b3e0d405875b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4871dd57f0ecd21f2d7f2b64f2493a0612dd77b89b0feeff7852b3ea1421b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://255667ec678133d539daab501a5b98a62289ce5d0229da32b3582e57ad5a5c40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28b887516a54da7ea3f035c2831e5d2ceef4487d4328fb87020325e4818d991f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.394150 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.407253 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdcf52cacbc491094ef37159fd0f8c07c157589e5415f23c4ac8b78649bd47f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\
"},\\\"containerID\\\":\\\"cri-o://66c49dcd651576d83f3cec1e94594be05ce6eef4d8e7f85c16a8e7424e958d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.420412 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-rzs5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://7992dd8c360b5fa59546180b8358b2f8950b3b1a60bddb47c4b085abd26fee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rzs5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc 
kubenswrapper[5108]: I0104 00:11:56.431865 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2v7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2v7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-d8pjz\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.452406 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.452455 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.452466 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.452484 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.452498 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:56Z","lastTransitionTime":"2026-01-04T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.454697 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6220c537-1e01-468c-ade3-4489ff45c4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://5faa5d936dcf21f3645dc93fead84972db7b350c39f1ae1f4ba5ddb7af9d0f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:31Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://5600c53dc483245092b5d86d14ce5cd512c39f5cde0f47f32ba2d68c92d05cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:31Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9eb8b844800fe1d272ec5c719cd0db94d9da63d845e436f1afbafda9fcf5c3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:31Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2ea94b55e12c0f25dcd9c205306a29a282c096d4bbf535c91a6b5cc419be53f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:31Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4770a34a9314b95470ad00e2ab4b5d3dc56c2a21e54866222ebe78dcd2f04ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d98857ef4501aaef6030f0f846b91a14f15880222b497d8721b729a811f9cc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98857ef4501aaef6030f0f846b91a14f15880222b497d8721b729a811f9cc0b\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-01-04T00:10:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d59b5133e349e7e5d7b721998724542bfa25fd017309a83749abbe4f38790799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d59b5133e349e7e5d7b721998724542bfa25fd017309a83749abbe4f38790799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://33de743a58f4b3abac7e4ee060e48ec3b0d12948e982e7d543847f2234fad921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33de743a58f4b3abac7e4ee060e48ec3b0d12948e982e7d543847f2234fad921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.469648 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5eb9f09184a1abd0e30a0470232ca9aa91b8e269818766d8b4fb8e5cb595ea58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.482129 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.498113 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.514369 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1304679c-1853-474c-9796-e64e919305dd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b16279618f52b1b2a8e55afef485f23e6bc086f31f3c1e59680047893f657fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount
\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7kzr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.533954 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-ce
rt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-open
vswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f22b382b86afe25cecc9a71e51f0b968f7a21c0676745ebc7343989517586318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\"
,\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f22b382b86afe25cecc9a71e51f0b968f7a21c0676745ebc7343989517586318\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph7rp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nhl4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.550866 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e8ada1-12ff-4db8-92fa-aad0b162537b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b46597ace50f1479ce247dd96257545e8ebd89d91ea8d25b96566c802bc5770c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a126cd2de771b57582f22e51d037cc93cb4afd7c3d6afe7fce9b37e4386a8de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2ba930f65c545366818d27dc41669dd09c8c81630dec8b3a9870c1bd42387201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a4534c19318d79d45b4218830f651d1cd0121733d43f00126b77092e620dbf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a4534c19318d79d45b4218830f651d1cd0121733d43f00126b77092e620dbf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.556005 5108 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.556046 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.556057 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.556073 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.556086 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:56Z","lastTransitionTime":"2026-01-04T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.557094 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.557287 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlfqf" podUID="6feab616-6edc-4a90-8ee9-f5ae1c2e80c5" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.557779 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.557849 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.557949 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.558025 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.558234 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 04 00:11:56 crc kubenswrapper[5108]: E0104 00:11:56.558315 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.562107 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.563130 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.634452 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1581284b-5ee5-493b-8401-025c4348876e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://7b7d5d310358a9b842de277978eebe04b3dd67697935a4e7331293c8f2ce2c12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cf77409fe9a2a06b6cee539ab960b8ffe727a07751479e7c45e6314efc896193\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b50433c05b4e9462bc1aeb26ab699177676176c7912e3f3701262c4c809e3cc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://001488f02f
298ecdbad61e43398fbbe845d04526ab076c51dc377df80bfbc40e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://001488f02f298ecdbad61e43398fbbe845d04526ab076c51dc377df80bfbc40e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-04T00:11:37Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0104 00:11:37.205303 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0104 00:11:37.205595 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0104 00:11:37.206610 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-496936953/tls.crt::/tmp/serving-cert-496936953/tls.key\\\\\\\"\\\\nI0104 00:11:37.611539 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0104 00:11:37.615476 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0104 00:11:37.615524 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0104 00:11:37.615568 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0104 00:11:37.615575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0104 00:11:37.620021 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0104 00:11:37.620062 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0104 00:11:37.620066 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0104 00:11:37.620070 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0104 00:11:37.620076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0104 00:11:37.620079 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0104 00:11:37.620081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0104 00:11:37.620229 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0104 00:11:37.623244 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-04T00:11:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2d7a38395218096d15fda6992626e039e078f2bec25e625392f1b72f1fc46dcb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://76bbcaf7c19eae97cabab72b1af9ee18fd88354943af8dab060b9ab39179242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76bbcaf7c19eae97cabab72b1af9ee18fd88354943af8dab060b9ab39179242a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.644182 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.651816 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.654341 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-54hgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae5be4c-02db-4fcd-81dc-a86584c36ef5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7b245ea007395f1e5ce0c2a5c198dec6ffc524d753b8a46dedbb88251ce88c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z79d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54hgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.658494 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.658547 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.658560 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.658582 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.658597 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:56Z","lastTransitionTime":"2026-01-04T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.666397 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7vbfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c974595e-d4c8-4c12-975a-2adb13a4c399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://19cb153fdd72887c57559e18117a87798bb19ff0f5d8f78527d1f06fdfac9e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6x7fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7vbfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.678894 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c9cf99-7a0b-4178-aa66-b771307149c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://527b9b2ed8353f00600d3385d2dd27e109b87532fe919428fc3fcd303846c1f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2efcdf49d4b3f8088542db77988c0d89b0543858ed507ac440a68dbdf5705732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2efcdf49d4b3f8088542db77988c0d89b0543858ed507ac440a68dbdf5705732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.680840 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" 
path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.688902 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.690788 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.694577 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.695263 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.697434 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.708284 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlfqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zntvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zntvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.708836 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.710117 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.723571 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.724626 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.726359 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f377d71c-c91f-4a27-8276-7e06263de9f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://224a19824a39bcd1811e5de78454eaa50abc9730908edc0cad1179670251e933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xgrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://94f4e2cbc916293b4e6676fb0b3fe4568b76f062b4ce243281ad611c1958954a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xgrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-njl5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.726476 5108 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.726941 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.732380 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.733321 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.739530 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-54hgz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ae5be4c-02db-4fcd-81dc-a86584c36ef5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7b245ea007395f1e5ce0c2a5c198dec6ffc524d753b8a46dedbb88251ce88c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z79d4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-54hgz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.751494 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7vbfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c974595e-d4c8-4c12-975a-2adb13a4c399\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://19cb153fdd72887c57559e18117a87798bb
19ff0f5d8f78527d1f06fdfac9e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6x7fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7vbfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.756715 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.758379 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.760677 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.763878 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9c9cf99-7a0b-4178-aa66-b771307149c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://527b9b2ed8353f00600d3385d2dd27e109b87532fe919428fc3fcd303846c1f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\
\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2efcdf49d4b3f8088542db77988c0d89b0543858ed507ac440a68dbdf5705732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2efcdf49d4b3f8088542db77988c0d89b0543858ed507ac440a68dbdf5705732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.764252 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.764321 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.764338 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.764361 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.764375 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:56Z","lastTransitionTime":"2026-01-04T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.767430 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.772311 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.773093 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.776076 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.777514 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.777901 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.779345 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.789043 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.790169 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.790909 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlfqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zntvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zntvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlfqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.792894 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.794154 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.795885 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.797590 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.801003 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.802871 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f377d71c-c91f-4a27-8276-7e06263de9f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://224a19824a39bcd1811e5de78454eaa50abc9730908edc0cad1179670251e933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xgrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://94f4e2cbc916293b4e6676fb0b3fe4568b76f062b4ce243281ad611c1958954a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xgrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-njl5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.804132 5108 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.805226 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.806089 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.807787 5108 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.807938 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.810997 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.812829 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.814224 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 
00:11:56.824926 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"732792c7-3389-4b84-88bd-7207a86bf590\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9b4eb4e10456fad30e3a03344ec2affe56bf2b509b098d5b2b3e0d405875b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4871dd57f0ecd21f2d7f2b64f2493a0612dd77b89b0feeff7852b3ea1421b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://255667ec678133d539daab501a5b98a62289ce5d0229da32b3582e57ad5a5c40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28b887516a54da7ea3f035c2831e5d2ceef4487d4328fb87020325e4818d991f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.828341 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.829576 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.833155 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.834807 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.835693 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" 
path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.837135 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.838514 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.839999 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.840392 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.842190 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.843447 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.844273 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.845571 5108 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.847173 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.848726 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.850405 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.852119 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.854774 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.857166 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdcf52cacbc491094ef37159fd0f8c07c157589e5415f23c4ac8b78649bd47f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\
"},\\\"containerID\\\":\\\"cri-o://66c49dcd651576d83f3cec1e94594be05ce6eef4d8e7f85c16a8e7424e958d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.867545 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.867602 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.867614 5108 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.867639 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.867655 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:56Z","lastTransitionTime":"2026-01-04T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.876750 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-rzs5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://7992dd8
c360b5fa59546180b8358b2f8950b3b1a60bddb47c4b085abd26fee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rzs5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.890863 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2v7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z2v7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:11:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-d8pjz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.916642 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6220c537-1e01-468c-ade3-4489ff45c4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-04T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://5faa5d936dcf21f3645dc93fead84972db7b350c39f1ae1f4ba5ddb7af9d0f91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:31Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://5600c53dc483245092b5d86d14ce5cd512c39f5cde0f47f32ba2d68c92d05cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:31Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"
}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9eb8b844800fe1d272ec5c719cd0db94d9da63d845e436f1afbafda9fcf5c3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:31Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2ea94b55e12c0f25dcd9c205306a29a282c096d4bbf535c91a6b5cc419be53f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:31Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd
\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4770a34a9314b95470ad00e2ab4b5d3dc56c2a21e54866222ebe78dcd2f04ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:10:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d98857ef4501aaef6030f0f846b91a14f15880222b497d8721b729a811f9cc0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee
2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98857ef4501aaef6030f0f846b91a14f15880222b497d8721b729a811f9cc0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d59b5133e349e7e5d7b721998724542bfa25fd017309a83749abbe4f38790799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d59b5133e349e7e5d7b721998724542bfa25fd017309a83749abbe4f38790799\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri
-o://33de743a58f4b3abac7e4ee060e48ec3b0d12948e982e7d543847f2234fad921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33de743a58f4b3abac7e4ee060e48ec3b0d12948e982e7d543847f2234fad921\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-04T00:10:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-04T00:10:29Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-04T00:10:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.945871 5108 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-04T00:11:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5eb9f09184a1abd0e30a0470232ca9aa91b8e269818766d8b4fb8e5cb595ea58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-04T00:11:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.970637 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.970696 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.970708 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.970729 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 04 00:11:56 crc kubenswrapper[5108]: I0104 00:11:56.970742 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:56Z","lastTransitionTime":"2026-01-04T00:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.052112 5108 generic.go:358] "Generic (PLEG): container finished" podID="1304679c-1853-474c-9796-e64e919305dd" containerID="b16279618f52b1b2a8e55afef485f23e6bc086f31f3c1e59680047893f657fbe" exitCode=0
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.052197 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" event={"ID":"1304679c-1853-474c-9796-e64e919305dd","Type":"ContainerDied","Data":"b16279618f52b1b2a8e55afef485f23e6bc086f31f3c1e59680047893f657fbe"}
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.055001 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" event={"ID":"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6","Type":"ContainerStarted","Data":"f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17"}
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.057109 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerStarted","Data":"07a9dbf38baca9c5b3fbe3dde40d4a145aa21599df789024c5de598cc56ae61d"}
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.072677 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.072730 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.072741 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.072758 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.072770 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:57Z","lastTransitionTime":"2026-01-04T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.174182 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.174240 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.174253 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.174285 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.174298 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:57Z","lastTransitionTime":"2026-01-04T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.277712 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.277774 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.277786 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.277802 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.277815 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:57Z","lastTransitionTime":"2026-01-04T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.320337 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=3.320304365 podStartE2EDuration="3.320304365s" podCreationTimestamp="2026-01-04 00:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:11:57.27263825 +0000 UTC m=+91.261203346" watchObservedRunningTime="2026-01-04 00:11:57.320304365 +0000 UTC m=+91.308869451"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.359287 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=3.359264265 podStartE2EDuration="3.359264265s" podCreationTimestamp="2026-01-04 00:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:11:57.354342222 +0000 UTC m=+91.342907308" watchObservedRunningTime="2026-01-04 00:11:57.359264265 +0000 UTC m=+91.347829351"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.383902 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.383963 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.383973 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.383993 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.384005 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:57Z","lastTransitionTime":"2026-01-04T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.391884 5108 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.429008 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-54hgz" podStartSLOduration=69.428983921 podStartE2EDuration="1m9.428983921s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:11:57.428539489 +0000 UTC m=+91.417104575" watchObservedRunningTime="2026-01-04 00:11:57.428983921 +0000 UTC m=+91.417549017"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.443003 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7vbfj" podStartSLOduration=69.442979421 podStartE2EDuration="1m9.442979421s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:11:57.442646712 +0000 UTC m=+91.431211808" watchObservedRunningTime="2026-01-04 00:11:57.442979421 +0000 UTC m=+91.431544507"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.457868 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=3.457834205 podStartE2EDuration="3.457834205s" podCreationTimestamp="2026-01-04 00:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:11:57.456796957 +0000 UTC m=+91.445362043" watchObservedRunningTime="2026-01-04 00:11:57.457834205 +0000 UTC m=+91.446399291"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.495187 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.495262 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.495274 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.495296 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.495312 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:57Z","lastTransitionTime":"2026-01-04T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.569346 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podStartSLOduration=69.569318696 podStartE2EDuration="1m9.569318696s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:11:57.568867364 +0000 UTC m=+91.557432470" watchObservedRunningTime="2026-01-04 00:11:57.569318696 +0000 UTC m=+91.557883782" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.604679 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.605157 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.605172 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.605192 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.605222 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:57Z","lastTransitionTime":"2026-01-04T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.613216 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=3.613181078 podStartE2EDuration="3.613181078s" podCreationTimestamp="2026-01-04 00:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:11:57.61212459 +0000 UTC m=+91.600689696" watchObservedRunningTime="2026-01-04 00:11:57.613181078 +0000 UTC m=+91.601746164" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.712178 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.712277 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.712328 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.712354 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.712369 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:57Z","lastTransitionTime":"2026-01-04T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.731156 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rzs5n" podStartSLOduration=69.731131225 podStartE2EDuration="1m9.731131225s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:11:57.729805569 +0000 UTC m=+91.718370665" watchObservedRunningTime="2026-01-04 00:11:57.731131225 +0000 UTC m=+91.719696321" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.768347 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" podStartSLOduration=68.768313635 podStartE2EDuration="1m8.768313635s" podCreationTimestamp="2026-01-04 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:11:57.767883954 +0000 UTC m=+91.756449040" watchObservedRunningTime="2026-01-04 00:11:57.768313635 +0000 UTC m=+91.756878731" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.815675 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.815731 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.815742 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.815766 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.815777 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:57Z","lastTransitionTime":"2026-01-04T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.920387 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.920925 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.920939 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.920958 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:57 crc kubenswrapper[5108]: I0104 00:11:57.920971 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:57Z","lastTransitionTime":"2026-01-04T00:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.023886 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.023938 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.023947 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.023964 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.023977 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:58Z","lastTransitionTime":"2026-01-04T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.068989 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerStarted","Data":"44faeb57fa086d65419836ca35d54febc9a5fbdca1cd7c4f65aceecd1577f867"} Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.134488 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.134553 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.134566 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.134588 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.134603 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:58Z","lastTransitionTime":"2026-01-04T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.176527 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.176635 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.176677 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.176705 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.176907 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.176930 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.176941 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.177010 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:02.176991796 +0000 UTC m=+96.165556882 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.177587 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.177633 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-01-04 00:12:02.177620604 +0000 UTC m=+96.166185690 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.177834 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.177896 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.177927 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.178016 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.178050 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:02.178020924 +0000 UTC m=+96.166586030 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.178180 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:02.178150358 +0000 UTC m=+96.166715444 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.238002 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.238076 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.238092 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.238120 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.238139 5108 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:58Z","lastTransitionTime":"2026-01-04T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.341857 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.341925 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.341937 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.341955 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.341964 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:58Z","lastTransitionTime":"2026-01-04T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.379087 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.379281 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs\") pod \"network-metrics-daemon-mlfqf\" (UID: \"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\") " pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.379350 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:02.379307507 +0000 UTC m=+96.367872723 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.379428 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.379534 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs podName:6feab616-6edc-4a90-8ee9-f5ae1c2e80c5 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:02.379510652 +0000 UTC m=+96.368075738 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs") pod "network-metrics-daemon-mlfqf" (UID: "6feab616-6edc-4a90-8ee9-f5ae1c2e80c5") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.444498 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.444536 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.444546 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.444563 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.444573 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:58Z","lastTransitionTime":"2026-01-04T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.448784 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.448941 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlfqf" podUID="6feab616-6edc-4a90-8ee9-f5ae1c2e80c5" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.449021 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.449073 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.449137 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.449187 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.449317 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 04 00:11:58 crc kubenswrapper[5108]: E0104 00:11:58.449372 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.547029 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.547086 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.547102 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.547128 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.547144 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:58Z","lastTransitionTime":"2026-01-04T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.649449 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.649517 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.649530 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.649555 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.649575 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:58Z","lastTransitionTime":"2026-01-04T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.753318 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.753402 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.753423 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.753453 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.753477 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:58Z","lastTransitionTime":"2026-01-04T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.857024 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.857092 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.857120 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.857145 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.857163 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:58Z","lastTransitionTime":"2026-01-04T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.959973 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.960026 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.960037 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.960056 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:58 crc kubenswrapper[5108]: I0104 00:11:58.960068 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:58Z","lastTransitionTime":"2026-01-04T00:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.062571 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.062939 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.062952 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.062970 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.062981 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:59Z","lastTransitionTime":"2026-01-04T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.087394 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"037e611e22d1486c05d6e535ddb2431bd39cebb9253c0b0eb04962a69fad3129"} Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.092163 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerStarted","Data":"ceb6c895f063e34fecf7a80e91ded0aba6095b63274ff38158805a59e6edfdcf"} Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.092238 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerStarted","Data":"9ca02fd651dcad92f4572ec7f186527a8984074514c99f8dc8723a14f0bb5428"} Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.092254 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerStarted","Data":"71bb346536e06ba0117423a8b6637180393256b79d6aeb8295eb95f0866da85b"} Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.095517 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" event={"ID":"1304679c-1853-474c-9796-e64e919305dd","Type":"ContainerStarted","Data":"368a0528987432bdbbeb15bc41d25fc5f1b4930b7d7c682bb6dbe231c76fbf53"} Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.164971 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.165027 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.165038 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.165068 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.165084 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:59Z","lastTransitionTime":"2026-01-04T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.268658 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.268721 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.268733 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.268758 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.268772 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:59Z","lastTransitionTime":"2026-01-04T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.370548 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.370588 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.370597 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.370612 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.370621 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:59Z","lastTransitionTime":"2026-01-04T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.472919 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.473182 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.473328 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.473439 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.473520 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:59Z","lastTransitionTime":"2026-01-04T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.582866 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.583137 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.583220 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.583299 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.583358 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:59Z","lastTransitionTime":"2026-01-04T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.685460 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.685510 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.685520 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.685536 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.685548 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:59Z","lastTransitionTime":"2026-01-04T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.788521 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.788598 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.788617 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.788639 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.788653 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:59Z","lastTransitionTime":"2026-01-04T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.890662 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.890720 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.890735 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.890757 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.890770 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:59Z","lastTransitionTime":"2026-01-04T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.992978 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.993414 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.993498 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.993570 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:11:59 crc kubenswrapper[5108]: I0104 00:11:59.993636 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:11:59Z","lastTransitionTime":"2026-01-04T00:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.096489 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.096540 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.096552 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.096575 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.096590 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:00Z","lastTransitionTime":"2026-01-04T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.143126 5108 generic.go:358] "Generic (PLEG): container finished" podID="1304679c-1853-474c-9796-e64e919305dd" containerID="368a0528987432bdbbeb15bc41d25fc5f1b4930b7d7c682bb6dbe231c76fbf53" exitCode=0 Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.143269 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" event={"ID":"1304679c-1853-474c-9796-e64e919305dd","Type":"ContainerDied","Data":"368a0528987432bdbbeb15bc41d25fc5f1b4930b7d7c682bb6dbe231c76fbf53"} Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.148629 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerStarted","Data":"2c106b68a27f251b0aa323e664dbc162c47f77a095870675163fc1c7f76ab87a"} Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.198947 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.199023 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.199046 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.199065 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.199079 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:00Z","lastTransitionTime":"2026-01-04T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.301672 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.301716 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.301727 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.301747 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.301760 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:00Z","lastTransitionTime":"2026-01-04T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.404997 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.405886 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.405900 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.405923 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.405937 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:00Z","lastTransitionTime":"2026-01-04T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.448772 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 04 00:12:00 crc kubenswrapper[5108]: E0104 00:12:00.448971 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.449036 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:12:00 crc kubenswrapper[5108]: E0104 00:12:00.449298 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.448791 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:12:00 crc kubenswrapper[5108]: E0104 00:12:00.449464 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlfqf" podUID="6feab616-6edc-4a90-8ee9-f5ae1c2e80c5" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.449510 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 04 00:12:00 crc kubenswrapper[5108]: E0104 00:12:00.449591 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.508399 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.508464 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.508475 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.508494 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.508506 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:00Z","lastTransitionTime":"2026-01-04T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.611487 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.611536 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.611549 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.611565 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.611574 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:00Z","lastTransitionTime":"2026-01-04T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.714640 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.714716 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.714739 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.714768 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.714790 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:00Z","lastTransitionTime":"2026-01-04T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.817796 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.817874 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.817888 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.817911 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.817924 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:00Z","lastTransitionTime":"2026-01-04T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.921231 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.921302 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.921319 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.921343 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:00 crc kubenswrapper[5108]: I0104 00:12:00.921359 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:00Z","lastTransitionTime":"2026-01-04T00:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.024062 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.024116 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.024128 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.024143 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.024155 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:01Z","lastTransitionTime":"2026-01-04T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.126541 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.126601 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.126616 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.126638 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.126653 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:01Z","lastTransitionTime":"2026-01-04T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.229562 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.229634 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.229649 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.229669 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.229686 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:01Z","lastTransitionTime":"2026-01-04T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.332001 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.332056 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.332077 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.332099 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.332111 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:01Z","lastTransitionTime":"2026-01-04T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.434532 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.434595 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.434613 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.434635 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.434650 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:01Z","lastTransitionTime":"2026-01-04T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.537387 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.537483 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.537505 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.537533 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.537553 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:01Z","lastTransitionTime":"2026-01-04T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.640879 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.640979 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.640992 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.641012 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.641031 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:01Z","lastTransitionTime":"2026-01-04T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.744185 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.744245 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.744255 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.744272 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.744281 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:01Z","lastTransitionTime":"2026-01-04T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.846529 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.846630 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.846645 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.846687 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.846705 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:01Z","lastTransitionTime":"2026-01-04T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.949951 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.950399 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.950540 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.950676 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:01 crc kubenswrapper[5108]: I0104 00:12:01.950792 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:01Z","lastTransitionTime":"2026-01-04T00:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.053924 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.054347 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.054492 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.054595 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.054661 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:02Z","lastTransitionTime":"2026-01-04T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.156529 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.156595 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.156663 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.156687 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.156714 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:02Z","lastTransitionTime":"2026-01-04T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.257122 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.257238 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.257309 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.257355 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.257494 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.257557 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.257647 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.257662 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.257674 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.257566 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:10.257548922 +0000 UTC m=+104.246114019 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.257731 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:10.257718007 +0000 UTC m=+104.246283103 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.257741 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.257748 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:10.257739038 +0000 UTC m=+104.246304134 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.257759 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.257775 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.257826 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:10.257805229 +0000 UTC m=+104.246370315 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.259176 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.259240 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.259259 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.259279 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.259295 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:02Z","lastTransitionTime":"2026-01-04T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.361155 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.361227 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.361238 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.361254 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.361271 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:02Z","lastTransitionTime":"2026-01-04T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.448905 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.448927 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.449158 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mlfqf" podUID="6feab616-6edc-4a90-8ee9-f5ae1c2e80c5" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.449182 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.449360 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.449516 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.449649 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.450071 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.459503 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.459761 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:10.459717088 +0000 UTC m=+104.448282214 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.460115 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs\") pod \"network-metrics-daemon-mlfqf\" (UID: \"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\") " pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 00:12:02.460408 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 04 00:12:02 crc kubenswrapper[5108]: E0104 
00:12:02.460540 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs podName:6feab616-6edc-4a90-8ee9-f5ae1c2e80c5 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:10.46050694 +0000 UTC m=+104.449072056 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs") pod "network-metrics-daemon-mlfqf" (UID: "6feab616-6edc-4a90-8ee9-f5ae1c2e80c5") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.464359 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.464423 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.464448 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.464480 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.464504 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:02Z","lastTransitionTime":"2026-01-04T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.567862 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.567916 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.567930 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.567949 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.567965 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:02Z","lastTransitionTime":"2026-01-04T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.671364 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.671461 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.671497 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.671527 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.671550 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:02Z","lastTransitionTime":"2026-01-04T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.774812 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.774880 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.774889 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.774911 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.774925 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:02Z","lastTransitionTime":"2026-01-04T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.877907 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.878268 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.878344 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.878454 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.878524 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:02Z","lastTransitionTime":"2026-01-04T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.997998 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.998410 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.998483 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.998555 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:02 crc kubenswrapper[5108]: I0104 00:12:02.998618 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:02Z","lastTransitionTime":"2026-01-04T00:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.100807 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.101170 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.101271 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.101372 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.101509 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:03Z","lastTransitionTime":"2026-01-04T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.163043 5108 generic.go:358] "Generic (PLEG): container finished" podID="1304679c-1853-474c-9796-e64e919305dd" containerID="36b66f3ed12ca9b7058777f5b51ab5db3d9a6b4bf2f4e8053e9e63c2acc8a357" exitCode=0 Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.163295 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" event={"ID":"1304679c-1853-474c-9796-e64e919305dd","Type":"ContainerDied","Data":"36b66f3ed12ca9b7058777f5b51ab5db3d9a6b4bf2f4e8053e9e63c2acc8a357"} Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.168283 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerStarted","Data":"b63ae9033e496d2a17ee91a45e474e8e1a42c4d995a69d760d7187a8cf59aa2d"} Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.204522 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.204577 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.204591 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.204609 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.204623 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:03Z","lastTransitionTime":"2026-01-04T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.307438 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.307930 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.308038 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.308145 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.308268 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:03Z","lastTransitionTime":"2026-01-04T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.411040 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.411101 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.411114 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.411138 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.411152 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:03Z","lastTransitionTime":"2026-01-04T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.513442 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.513501 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.513513 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.513535 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.513548 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:03Z","lastTransitionTime":"2026-01-04T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.617933 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.618002 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.618019 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.618042 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.618053 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:03Z","lastTransitionTime":"2026-01-04T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.721414 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.721477 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.721493 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.721517 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.721533 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:03Z","lastTransitionTime":"2026-01-04T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.826338 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.826571 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.826897 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.827041 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.827149 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:03Z","lastTransitionTime":"2026-01-04T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.945517 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.945578 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.945600 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.945622 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:03 crc kubenswrapper[5108]: I0104 00:12:03.945634 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:03Z","lastTransitionTime":"2026-01-04T00:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.048682 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.049288 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.049334 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.049358 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.049614 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:04Z","lastTransitionTime":"2026-01-04T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.152713 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.152767 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.152777 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.152793 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.152806 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:04Z","lastTransitionTime":"2026-01-04T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.254889 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.254939 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.254953 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.254972 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.254990 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:04Z","lastTransitionTime":"2026-01-04T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.357634 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.357691 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.357701 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.357723 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.357735 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:04Z","lastTransitionTime":"2026-01-04T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.448159 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.448236 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 04 00:12:04 crc kubenswrapper[5108]: E0104 00:12:04.448424 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 04 00:12:04 crc kubenswrapper[5108]: E0104 00:12:04.448522 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.448663 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.448662 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:12:04 crc kubenswrapper[5108]: E0104 00:12:04.448761 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 04 00:12:04 crc kubenswrapper[5108]: E0104 00:12:04.448851 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mlfqf" podUID="6feab616-6edc-4a90-8ee9-f5ae1c2e80c5" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.460972 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.461021 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.461034 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.461056 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.461071 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:04Z","lastTransitionTime":"2026-01-04T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.563264 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.563319 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.563332 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.563353 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.563364 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:04Z","lastTransitionTime":"2026-01-04T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.666409 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.666462 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.666472 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.666494 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.666508 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:04Z","lastTransitionTime":"2026-01-04T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.769479 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.769544 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.769556 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.769577 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.769593 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:04Z","lastTransitionTime":"2026-01-04T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.871953 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.872037 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.872049 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.872067 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.872079 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:04Z","lastTransitionTime":"2026-01-04T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.975023 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.975097 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.975114 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.975143 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:04 crc kubenswrapper[5108]: I0104 00:12:04.975158 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:04Z","lastTransitionTime":"2026-01-04T00:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.076827 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.076879 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.076891 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.076911 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.076925 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:05Z","lastTransitionTime":"2026-01-04T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.179102 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.179160 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.179171 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.179190 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.179226 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:05Z","lastTransitionTime":"2026-01-04T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.281714 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.281766 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.281778 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.281796 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.281808 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:05Z","lastTransitionTime":"2026-01-04T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.384075 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.384121 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.384132 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.384151 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.384164 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:05Z","lastTransitionTime":"2026-01-04T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.486839 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.486908 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.486923 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.486944 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.486956 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:05Z","lastTransitionTime":"2026-01-04T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.589396 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.589468 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.589481 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.589503 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.589517 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:05Z","lastTransitionTime":"2026-01-04T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.692715 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.692796 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.692813 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.692839 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.692881 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:05Z","lastTransitionTime":"2026-01-04T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.795918 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.796558 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.796576 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.796616 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.796637 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:05Z","lastTransitionTime":"2026-01-04T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.823538 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.823594 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.823604 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.823627 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.823639 5108 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-04T00:12:05Z","lastTransitionTime":"2026-01-04T00:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 04 00:12:05 crc kubenswrapper[5108]: I0104 00:12:05.873272 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb"] Jan 04 00:12:06 crc kubenswrapper[5108]: I0104 00:12:06.561532 5108 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Jan 04 00:12:06 crc kubenswrapper[5108]: I0104 00:12:06.576230 5108 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 04 00:12:07 crc kubenswrapper[5108]: I0104 00:12:07.189107 5108 generic.go:358] "Generic (PLEG): container finished" podID="1304679c-1853-474c-9796-e64e919305dd" containerID="1656c79dab5f597cd90d663785770221724afdb89b3f9ed5515867f882e4d1d0" exitCode=0 Jan 04 00:12:07 crc kubenswrapper[5108]: I0104 00:12:07.901049 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" event={"ID":"1304679c-1853-474c-9796-e64e919305dd","Type":"ContainerStarted","Data":"1656c79dab5f597cd90d663785770221724afdb89b3f9ed5515867f882e4d1d0"} Jan 04 00:12:07 crc kubenswrapper[5108]: I0104 00:12:07.901246 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 04 00:12:07 crc kubenswrapper[5108]: E0104 00:12:07.901992 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 04 00:12:07 crc kubenswrapper[5108]: I0104 00:12:07.902575 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:07 crc kubenswrapper[5108]: I0104 00:12:07.903028 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 04 00:12:07 crc kubenswrapper[5108]: E0104 00:12:07.903330 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 04 00:12:07 crc kubenswrapper[5108]: I0104 00:12:07.903379 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:12:07 crc kubenswrapper[5108]: E0104 00:12:07.903862 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlfqf" podUID="6feab616-6edc-4a90-8ee9-f5ae1c2e80c5" Jan 04 00:12:07 crc kubenswrapper[5108]: I0104 00:12:07.904972 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:12:07 crc kubenswrapper[5108]: E0104 00:12:07.905111 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 04 00:12:07 crc kubenswrapper[5108]: I0104 00:12:07.906767 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 04 00:12:07 crc kubenswrapper[5108]: I0104 00:12:07.907094 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 04 00:12:07 crc kubenswrapper[5108]: I0104 00:12:07.907150 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" event={"ID":"1304679c-1853-474c-9796-e64e919305dd","Type":"ContainerDied","Data":"1656c79dab5f597cd90d663785770221724afdb89b3f9ed5515867f882e4d1d0"} Jan 04 00:12:07 crc kubenswrapper[5108]: I0104 00:12:07.907295 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 04 00:12:07 crc kubenswrapper[5108]: I0104 00:12:07.909185 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.031078 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b9ed14b-d67a-4d77-8247-5463b9d0c983-service-ca\") pod 
\"cluster-version-operator-7c9b9cfd6-5nqlb\" (UID: \"8b9ed14b-d67a-4d77-8247-5463b9d0c983\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.031132 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b9ed14b-d67a-4d77-8247-5463b9d0c983-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-5nqlb\" (UID: \"8b9ed14b-d67a-4d77-8247-5463b9d0c983\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.031226 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8b9ed14b-d67a-4d77-8247-5463b9d0c983-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-5nqlb\" (UID: \"8b9ed14b-d67a-4d77-8247-5463b9d0c983\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.031263 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b9ed14b-d67a-4d77-8247-5463b9d0c983-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-5nqlb\" (UID: \"8b9ed14b-d67a-4d77-8247-5463b9d0c983\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.031293 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8b9ed14b-d67a-4d77-8247-5463b9d0c983-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-5nqlb\" (UID: \"8b9ed14b-d67a-4d77-8247-5463b9d0c983\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:08 crc 
kubenswrapper[5108]: I0104 00:12:08.132992 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8b9ed14b-d67a-4d77-8247-5463b9d0c983-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-5nqlb\" (UID: \"8b9ed14b-d67a-4d77-8247-5463b9d0c983\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.133121 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b9ed14b-d67a-4d77-8247-5463b9d0c983-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-5nqlb\" (UID: \"8b9ed14b-d67a-4d77-8247-5463b9d0c983\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.133131 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8b9ed14b-d67a-4d77-8247-5463b9d0c983-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-5nqlb\" (UID: \"8b9ed14b-d67a-4d77-8247-5463b9d0c983\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.133149 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b9ed14b-d67a-4d77-8247-5463b9d0c983-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-5nqlb\" (UID: \"8b9ed14b-d67a-4d77-8247-5463b9d0c983\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.133317 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8b9ed14b-d67a-4d77-8247-5463b9d0c983-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-5nqlb\" 
(UID: \"8b9ed14b-d67a-4d77-8247-5463b9d0c983\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.133354 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b9ed14b-d67a-4d77-8247-5463b9d0c983-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-5nqlb\" (UID: \"8b9ed14b-d67a-4d77-8247-5463b9d0c983\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.133490 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8b9ed14b-d67a-4d77-8247-5463b9d0c983-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-5nqlb\" (UID: \"8b9ed14b-d67a-4d77-8247-5463b9d0c983\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.134035 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b9ed14b-d67a-4d77-8247-5463b9d0c983-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-5nqlb\" (UID: \"8b9ed14b-d67a-4d77-8247-5463b9d0c983\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.141103 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b9ed14b-d67a-4d77-8247-5463b9d0c983-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-5nqlb\" (UID: \"8b9ed14b-d67a-4d77-8247-5463b9d0c983\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.163530 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8b9ed14b-d67a-4d77-8247-5463b9d0c983-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-5nqlb\" (UID: \"8b9ed14b-d67a-4d77-8247-5463b9d0c983\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.199676 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerStarted","Data":"4debfb6392bc1bc3f892ac0820a0cac382eee9fe3e7c3376c06c41d8b5f0c981"} Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.200337 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.200517 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.200559 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.224157 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.232523 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.233839 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.268412 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" podStartSLOduration=79.268395596 podStartE2EDuration="1m19.268395596s" podCreationTimestamp="2026-01-04 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:08.266552885 +0000 UTC m=+102.255118001" watchObservedRunningTime="2026-01-04 00:12:08.268395596 +0000 UTC m=+102.256960692" Jan 04 00:12:08 crc kubenswrapper[5108]: I0104 00:12:08.450417 5108 scope.go:117] "RemoveContainer" containerID="001488f02f298ecdbad61e43398fbbe845d04526ab076c51dc377df80bfbc40e" Jan 04 00:12:08 crc kubenswrapper[5108]: E0104 00:12:08.455500 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 04 00:12:09 crc kubenswrapper[5108]: I0104 00:12:09.204263 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" 
event={"ID":"8b9ed14b-d67a-4d77-8247-5463b9d0c983","Type":"ContainerStarted","Data":"65945edad11349b4c9b40d234c0e8567df54d4f5c4b90a67e74145565b2ac06a"} Jan 04 00:12:09 crc kubenswrapper[5108]: I0104 00:12:09.204901 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" event={"ID":"8b9ed14b-d67a-4d77-8247-5463b9d0c983","Type":"ContainerStarted","Data":"615badd5715bf263a66a619bb4d749f885ef2b516e3f40663dc5fbe445d70361"} Jan 04 00:12:09 crc kubenswrapper[5108]: I0104 00:12:09.208727 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" event={"ID":"1304679c-1853-474c-9796-e64e919305dd","Type":"ContainerStarted","Data":"f4881b78c0896f6ebc2b263edb786ae73d95c12e3dde60156dc8cc6cccc79107"} Jan 04 00:12:09 crc kubenswrapper[5108]: I0104 00:12:09.220179 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5nqlb" podStartSLOduration=81.220166291 podStartE2EDuration="1m21.220166291s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:09.219865733 +0000 UTC m=+103.208430869" watchObservedRunningTime="2026-01-04 00:12:09.220166291 +0000 UTC m=+103.208731377" Jan 04 00:12:09 crc kubenswrapper[5108]: I0104 00:12:09.448481 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:12:09 crc kubenswrapper[5108]: E0104 00:12:09.448663 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mlfqf" podUID="6feab616-6edc-4a90-8ee9-f5ae1c2e80c5" Jan 04 00:12:09 crc kubenswrapper[5108]: I0104 00:12:09.449274 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:12:09 crc kubenswrapper[5108]: E0104 00:12:09.449348 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 04 00:12:09 crc kubenswrapper[5108]: I0104 00:12:09.449400 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 04 00:12:09 crc kubenswrapper[5108]: I0104 00:12:09.449434 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 04 00:12:09 crc kubenswrapper[5108]: E0104 00:12:09.449647 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 04 00:12:09 crc kubenswrapper[5108]: E0104 00:12:09.449447 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 04 00:12:10 crc kubenswrapper[5108]: I0104 00:12:10.218764 5108 generic.go:358] "Generic (PLEG): container finished" podID="1304679c-1853-474c-9796-e64e919305dd" containerID="f4881b78c0896f6ebc2b263edb786ae73d95c12e3dde60156dc8cc6cccc79107" exitCode=0 Jan 04 00:12:10 crc kubenswrapper[5108]: I0104 00:12:10.218875 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" event={"ID":"1304679c-1853-474c-9796-e64e919305dd","Type":"ContainerDied","Data":"f4881b78c0896f6ebc2b263edb786ae73d95c12e3dde60156dc8cc6cccc79107"} Jan 04 00:12:10 crc kubenswrapper[5108]: I0104 00:12:10.260730 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:12:10 crc kubenswrapper[5108]: I0104 00:12:10.260787 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 04 00:12:10 crc kubenswrapper[5108]: I0104 00:12:10.260814 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:12:10 crc kubenswrapper[5108]: I0104 00:12:10.260981 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 04 00:12:10 crc kubenswrapper[5108]: E0104 00:12:10.260859 5108 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 04 00:12:10 crc kubenswrapper[5108]: E0104 00:12:10.261040 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 04 00:12:10 crc kubenswrapper[5108]: E0104 00:12:10.261080 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:26.26106623 +0000 UTC m=+120.249631316 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 04 00:12:10 crc kubenswrapper[5108]: E0104 00:12:10.261089 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 04 00:12:10 crc kubenswrapper[5108]: E0104 00:12:10.261113 5108 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 04 00:12:10 crc kubenswrapper[5108]: E0104 00:12:10.260931 5108 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 04 00:12:10 crc kubenswrapper[5108]: E0104 00:12:10.261250 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:26.261184353 +0000 UTC m=+120.249749479 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 04 00:12:10 crc kubenswrapper[5108]: E0104 00:12:10.261393 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:26.261344777 +0000 UTC m=+120.249909883 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 04 00:12:10 crc kubenswrapper[5108]: E0104 00:12:10.261602 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 04 00:12:10 crc kubenswrapper[5108]: E0104 00:12:10.261643 5108 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 04 00:12:10 crc kubenswrapper[5108]: E0104 00:12:10.261660 5108 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 04 00:12:10 crc kubenswrapper[5108]: E0104 00:12:10.261757 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:26.261731747 +0000 UTC m=+120.250296843 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 04 00:12:10 crc kubenswrapper[5108]: I0104 00:12:10.464505 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:10 crc kubenswrapper[5108]: I0104 00:12:10.464684 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs\") pod \"network-metrics-daemon-mlfqf\" (UID: \"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\") " pod="openshift-multus/network-metrics-daemon-mlfqf"
Jan 04 00:12:10 crc kubenswrapper[5108]: E0104 00:12:10.465010 5108 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 04 00:12:10 crc kubenswrapper[5108]: E0104 00:12:10.465022 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:26.464980003 +0000 UTC m=+120.453545109 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:10 crc kubenswrapper[5108]: E0104 00:12:10.465159 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs podName:6feab616-6edc-4a90-8ee9-f5ae1c2e80c5 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:26.465126608 +0000 UTC m=+120.453691704 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs") pod "network-metrics-daemon-mlfqf" (UID: "6feab616-6edc-4a90-8ee9-f5ae1c2e80c5") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 04 00:12:11 crc kubenswrapper[5108]: I0104 00:12:11.227821 5108 generic.go:358] "Generic (PLEG): container finished" podID="1304679c-1853-474c-9796-e64e919305dd" containerID="80667374a3ec4ca473494cfc90490c93c55784c2e55ca06947e434dbe6cbceb6" exitCode=0
Jan 04 00:12:11 crc kubenswrapper[5108]: I0104 00:12:11.227938 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" event={"ID":"1304679c-1853-474c-9796-e64e919305dd","Type":"ContainerDied","Data":"80667374a3ec4ca473494cfc90490c93c55784c2e55ca06947e434dbe6cbceb6"}
Jan 04 00:12:11 crc kubenswrapper[5108]: I0104 00:12:11.501428 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 04 00:12:11 crc kubenswrapper[5108]: E0104 00:12:11.502852 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 04 00:12:11 crc kubenswrapper[5108]: I0104 00:12:11.503225 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlfqf"
Jan 04 00:12:11 crc kubenswrapper[5108]: E0104 00:12:11.503379 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlfqf" podUID="6feab616-6edc-4a90-8ee9-f5ae1c2e80c5"
Jan 04 00:12:11 crc kubenswrapper[5108]: I0104 00:12:11.503468 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 04 00:12:11 crc kubenswrapper[5108]: E0104 00:12:11.503531 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 04 00:12:11 crc kubenswrapper[5108]: I0104 00:12:11.503600 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 04 00:12:11 crc kubenswrapper[5108]: E0104 00:12:11.503662 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 04 00:12:11 crc kubenswrapper[5108]: I0104 00:12:11.822900 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mlfqf"]
Jan 04 00:12:12 crc kubenswrapper[5108]: I0104 00:12:12.237983 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" event={"ID":"1304679c-1853-474c-9796-e64e919305dd","Type":"ContainerStarted","Data":"f7d6e4a53564eee889b904188e0474d770db39379a48199c63859dd5faf11702"}
Jan 04 00:12:12 crc kubenswrapper[5108]: I0104 00:12:12.238044 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlfqf"
Jan 04 00:12:12 crc kubenswrapper[5108]: E0104 00:12:12.239715 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlfqf" podUID="6feab616-6edc-4a90-8ee9-f5ae1c2e80c5"
Jan 04 00:12:13 crc kubenswrapper[5108]: I0104 00:12:13.448525 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 04 00:12:13 crc kubenswrapper[5108]: I0104 00:12:13.448596 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 04 00:12:13 crc kubenswrapper[5108]: I0104 00:12:13.448653 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 04 00:12:13 crc kubenswrapper[5108]: I0104 00:12:13.448674 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlfqf"
Jan 04 00:12:13 crc kubenswrapper[5108]: E0104 00:12:13.450536 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 04 00:12:13 crc kubenswrapper[5108]: E0104 00:12:13.451075 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlfqf" podUID="6feab616-6edc-4a90-8ee9-f5ae1c2e80c5"
Jan 04 00:12:13 crc kubenswrapper[5108]: E0104 00:12:13.451240 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 04 00:12:13 crc kubenswrapper[5108]: E0104 00:12:13.451322 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 04 00:12:15 crc kubenswrapper[5108]: I0104 00:12:15.447880 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 04 00:12:15 crc kubenswrapper[5108]: I0104 00:12:15.447880 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlfqf"
Jan 04 00:12:15 crc kubenswrapper[5108]: E0104 00:12:15.448681 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 04 00:12:15 crc kubenswrapper[5108]: I0104 00:12:15.448924 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 04 00:12:15 crc kubenswrapper[5108]: E0104 00:12:15.449268 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 04 00:12:15 crc kubenswrapper[5108]: I0104 00:12:15.449291 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 04 00:12:15 crc kubenswrapper[5108]: E0104 00:12:15.449325 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlfqf" podUID="6feab616-6edc-4a90-8ee9-f5ae1c2e80c5"
Jan 04 00:12:15 crc kubenswrapper[5108]: E0104 00:12:15.449431 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 04 00:12:16 crc kubenswrapper[5108]: I0104 00:12:16.875667 5108 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Jan 04 00:12:16 crc kubenswrapper[5108]: I0104 00:12:16.875923 5108 kubelet_node_status.go:550] "Fast updating node status as it just became ready"
Jan 04 00:12:16 crc kubenswrapper[5108]: I0104 00:12:16.926872 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-7kzr9" podStartSLOduration=88.926844609 podStartE2EDuration="1m28.926844609s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:12.274318563 +0000 UTC m=+106.262883729" watchObservedRunningTime="2026-01-04 00:12:16.926844609 +0000 UTC m=+110.915409705"
Jan 04 00:12:16 crc kubenswrapper[5108]: I0104 00:12:16.929328 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-7llq6"]
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.090021 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz"]
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.090321 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.093760 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.093831 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.094124 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-jzcn5"]
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.094346 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.094646 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.097132 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"]
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.097419 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.098778 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.099056 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.099130 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.099327 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.099382 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.099413 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.099476 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.100328 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.100851 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.101064 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.101172 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.102063 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.103223 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.103746 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.104356 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.104594 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8"]
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.104819 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.107996 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.111615 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.112901 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.113361 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.111619 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.114621 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.114660 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.122219 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s77qp"]
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.122470 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.162183 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.162568 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.162755 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.162994 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.163169 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.172971 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/948d9eda-ff2a-4ee3-913b-6a3f19481ee5-images\") pod \"machine-api-operator-755bb95488-jzcn5\" (UID: \"948d9eda-ff2a-4ee3-913b-6a3f19481ee5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173038 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af85dc64-1599-4534-8cc4-be005c8893c3-config\") pod \"route-controller-manager-776cdc94d6-52hzh\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173061 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af85dc64-1599-4534-8cc4-be005c8893c3-tmp\") pod \"route-controller-manager-776cdc94d6-52hzh\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173079 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21fce9b3-74a6-4ddd-9011-f891ea99e09c-trusted-ca-bundle\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173101 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21fce9b3-74a6-4ddd-9011-f891ea99e09c-audit-policies\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173244 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/21fce9b3-74a6-4ddd-9011-f891ea99e09c-etcd-client\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173310 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21fce9b3-74a6-4ddd-9011-f891ea99e09c-serving-cert\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173355 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/948d9eda-ff2a-4ee3-913b-6a3f19481ee5-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-jzcn5\" (UID: \"948d9eda-ff2a-4ee3-913b-6a3f19481ee5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173393 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7578c202-c52d-4bd7-b125-b369e37a7cb7-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-fc5v8\" (UID: \"7578c202-c52d-4bd7-b125-b369e37a7cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173491 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/21fce9b3-74a6-4ddd-9011-f891ea99e09c-etcd-serving-ca\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173530 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/21fce9b3-74a6-4ddd-9011-f891ea99e09c-encryption-config\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173558 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0f0b110c-a11e-4e78-8e42-10c104fcf868-available-featuregates\") pod \"openshift-config-operator-5777786469-7llq6\" (UID: \"0f0b110c-a11e-4e78-8e42-10c104fcf868\") " pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173595 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/948d9eda-ff2a-4ee3-913b-6a3f19481ee5-config\") pod \"machine-api-operator-755bb95488-jzcn5\" (UID: \"948d9eda-ff2a-4ee3-913b-6a3f19481ee5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173655 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21fce9b3-74a6-4ddd-9011-f891ea99e09c-audit-dir\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173698 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af85dc64-1599-4534-8cc4-be005c8893c3-serving-cert\") pod \"route-controller-manager-776cdc94d6-52hzh\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173724 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvzrn\" (UniqueName: \"kubernetes.io/projected/21fce9b3-74a6-4ddd-9011-f891ea99e09c-kube-api-access-xvzrn\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173837 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt9rs\" (UniqueName: \"kubernetes.io/projected/948d9eda-ff2a-4ee3-913b-6a3f19481ee5-kube-api-access-mt9rs\") pod \"machine-api-operator-755bb95488-jzcn5\" (UID: \"948d9eda-ff2a-4ee3-913b-6a3f19481ee5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173913 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7578c202-c52d-4bd7-b125-b369e37a7cb7-config\") pod \"openshift-apiserver-operator-846cbfc458-fc5v8\" (UID: \"7578c202-c52d-4bd7-b125-b369e37a7cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.173983 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45gqq\" (UniqueName: \"kubernetes.io/projected/7578c202-c52d-4bd7-b125-b369e37a7cb7-kube-api-access-45gqq\") pod \"openshift-apiserver-operator-846cbfc458-fc5v8\" (UID: \"7578c202-c52d-4bd7-b125-b369e37a7cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.174033 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m85n9\" (UniqueName: \"kubernetes.io/projected/0f0b110c-a11e-4e78-8e42-10c104fcf868-kube-api-access-m85n9\") pod \"openshift-config-operator-5777786469-7llq6\" (UID: \"0f0b110c-a11e-4e78-8e42-10c104fcf868\") " pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.174062 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af85dc64-1599-4534-8cc4-be005c8893c3-client-ca\") pod \"route-controller-manager-776cdc94d6-52hzh\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.174085 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vpwq\" (UniqueName: \"kubernetes.io/projected/af85dc64-1599-4534-8cc4-be005c8893c3-kube-api-access-4vpwq\") pod \"route-controller-manager-776cdc94d6-52hzh\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.174121 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f0b110c-a11e-4e78-8e42-10c104fcf868-serving-cert\") pod \"openshift-config-operator-5777786469-7llq6\" (UID: \"0f0b110c-a11e-4e78-8e42-10c104fcf868\") " pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.275413 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7578c202-c52d-4bd7-b125-b369e37a7cb7-config\") pod \"openshift-apiserver-operator-846cbfc458-fc5v8\" (UID: \"7578c202-c52d-4bd7-b125-b369e37a7cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276027 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-45gqq\" (UniqueName: \"kubernetes.io/projected/7578c202-c52d-4bd7-b125-b369e37a7cb7-kube-api-access-45gqq\") pod \"openshift-apiserver-operator-846cbfc458-fc5v8\" (UID: \"7578c202-c52d-4bd7-b125-b369e37a7cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276108 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m85n9\" (UniqueName: \"kubernetes.io/projected/0f0b110c-a11e-4e78-8e42-10c104fcf868-kube-api-access-m85n9\") pod \"openshift-config-operator-5777786469-7llq6\" (UID: \"0f0b110c-a11e-4e78-8e42-10c104fcf868\") " pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276139 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af85dc64-1599-4534-8cc4-be005c8893c3-client-ca\") pod \"route-controller-manager-776cdc94d6-52hzh\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276168 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4vpwq\" (UniqueName: \"kubernetes.io/projected/af85dc64-1599-4534-8cc4-be005c8893c3-kube-api-access-4vpwq\") pod \"route-controller-manager-776cdc94d6-52hzh\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276194 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f0b110c-a11e-4e78-8e42-10c104fcf868-serving-cert\") pod \"openshift-config-operator-5777786469-7llq6\" (UID: \"0f0b110c-a11e-4e78-8e42-10c104fcf868\") " pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276282 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/948d9eda-ff2a-4ee3-913b-6a3f19481ee5-images\") pod \"machine-api-operator-755bb95488-jzcn5\" (UID: \"948d9eda-ff2a-4ee3-913b-6a3f19481ee5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276311 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af85dc64-1599-4534-8cc4-be005c8893c3-config\") pod \"route-controller-manager-776cdc94d6-52hzh\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276336 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af85dc64-1599-4534-8cc4-be005c8893c3-tmp\") pod \"route-controller-manager-776cdc94d6-52hzh\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276359 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21fce9b3-74a6-4ddd-9011-f891ea99e09c-trusted-ca-bundle\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276396 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21fce9b3-74a6-4ddd-9011-f891ea99e09c-audit-policies\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276439 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/21fce9b3-74a6-4ddd-9011-f891ea99e09c-etcd-client\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276472 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21fce9b3-74a6-4ddd-9011-f891ea99e09c-serving-cert\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276507 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/948d9eda-ff2a-4ee3-913b-6a3f19481ee5-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-jzcn5\" (UID: \"948d9eda-ff2a-4ee3-913b-6a3f19481ee5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276533 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7578c202-c52d-4bd7-b125-b369e37a7cb7-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-fc5v8\" (UID: \"7578c202-c52d-4bd7-b125-b369e37a7cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8"
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276578 5108 reconciler_common.go:224]
"operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/21fce9b3-74a6-4ddd-9011-f891ea99e09c-etcd-serving-ca\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276613 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/21fce9b3-74a6-4ddd-9011-f891ea99e09c-encryption-config\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276637 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0f0b110c-a11e-4e78-8e42-10c104fcf868-available-featuregates\") pod \"openshift-config-operator-5777786469-7llq6\" (UID: \"0f0b110c-a11e-4e78-8e42-10c104fcf868\") " pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276664 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/948d9eda-ff2a-4ee3-913b-6a3f19481ee5-config\") pod \"machine-api-operator-755bb95488-jzcn5\" (UID: \"948d9eda-ff2a-4ee3-913b-6a3f19481ee5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276684 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7578c202-c52d-4bd7-b125-b369e37a7cb7-config\") pod \"openshift-apiserver-operator-846cbfc458-fc5v8\" (UID: \"7578c202-c52d-4bd7-b125-b369e37a7cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8" Jan 04 00:12:17 
crc kubenswrapper[5108]: I0104 00:12:17.276712 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21fce9b3-74a6-4ddd-9011-f891ea99e09c-audit-dir\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276772 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21fce9b3-74a6-4ddd-9011-f891ea99e09c-audit-dir\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276788 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af85dc64-1599-4534-8cc4-be005c8893c3-serving-cert\") pod \"route-controller-manager-776cdc94d6-52hzh\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276843 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xvzrn\" (UniqueName: \"kubernetes.io/projected/21fce9b3-74a6-4ddd-9011-f891ea99e09c-kube-api-access-xvzrn\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.276922 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mt9rs\" (UniqueName: \"kubernetes.io/projected/948d9eda-ff2a-4ee3-913b-6a3f19481ee5-kube-api-access-mt9rs\") pod \"machine-api-operator-755bb95488-jzcn5\" (UID: \"948d9eda-ff2a-4ee3-913b-6a3f19481ee5\") " 
pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.280460 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/948d9eda-ff2a-4ee3-913b-6a3f19481ee5-images\") pod \"machine-api-operator-755bb95488-jzcn5\" (UID: \"948d9eda-ff2a-4ee3-913b-6a3f19481ee5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.280560 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af85dc64-1599-4534-8cc4-be005c8893c3-client-ca\") pod \"route-controller-manager-776cdc94d6-52hzh\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.280566 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0f0b110c-a11e-4e78-8e42-10c104fcf868-available-featuregates\") pod \"openshift-config-operator-5777786469-7llq6\" (UID: \"0f0b110c-a11e-4e78-8e42-10c104fcf868\") " pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.280649 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21fce9b3-74a6-4ddd-9011-f891ea99e09c-trusted-ca-bundle\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.281129 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af85dc64-1599-4534-8cc4-be005c8893c3-tmp\") pod \"route-controller-manager-776cdc94d6-52hzh\" 
(UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.281535 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21fce9b3-74a6-4ddd-9011-f891ea99e09c-audit-policies\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.281624 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/21fce9b3-74a6-4ddd-9011-f891ea99e09c-etcd-serving-ca\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.281730 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af85dc64-1599-4534-8cc4-be005c8893c3-config\") pod \"route-controller-manager-776cdc94d6-52hzh\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.282621 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/948d9eda-ff2a-4ee3-913b-6a3f19481ee5-config\") pod \"machine-api-operator-755bb95488-jzcn5\" (UID: \"948d9eda-ff2a-4ee3-913b-6a3f19481ee5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.282806 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm"] Jan 04 00:12:17 crc kubenswrapper[5108]: 
I0104 00:12:17.283068 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s77qp" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.288302 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21fce9b3-74a6-4ddd-9011-f891ea99e09c-serving-cert\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.288917 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/21fce9b3-74a6-4ddd-9011-f891ea99e09c-etcd-client\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.289487 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7578c202-c52d-4bd7-b125-b369e37a7cb7-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-fc5v8\" (UID: \"7578c202-c52d-4bd7-b125-b369e37a7cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.289879 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/21fce9b3-74a6-4ddd-9011-f891ea99e09c-encryption-config\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.291470 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/948d9eda-ff2a-4ee3-913b-6a3f19481ee5-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-jzcn5\" (UID: \"948d9eda-ff2a-4ee3-913b-6a3f19481ee5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.291963 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af85dc64-1599-4534-8cc4-be005c8893c3-serving-cert\") pod \"route-controller-manager-776cdc94d6-52hzh\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.296234 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.296280 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.296379 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.296497 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.299291 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvzrn\" (UniqueName: \"kubernetes.io/projected/21fce9b3-74a6-4ddd-9011-f891ea99e09c-kube-api-access-xvzrn\") pod \"apiserver-8596bd845d-7bpfz\" (UID: \"21fce9b3-74a6-4ddd-9011-f891ea99e09c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.299553 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f0b110c-a11e-4e78-8e42-10c104fcf868-serving-cert\") pod \"openshift-config-operator-5777786469-7llq6\" (UID: \"0f0b110c-a11e-4e78-8e42-10c104fcf868\") " pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.302657 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-45gqq\" (UniqueName: \"kubernetes.io/projected/7578c202-c52d-4bd7-b125-b369e37a7cb7-kube-api-access-45gqq\") pod \"openshift-apiserver-operator-846cbfc458-fc5v8\" (UID: \"7578c202-c52d-4bd7-b125-b369e37a7cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.303239 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m85n9\" (UniqueName: \"kubernetes.io/projected/0f0b110c-a11e-4e78-8e42-10c104fcf868-kube-api-access-m85n9\") pod \"openshift-config-operator-5777786469-7llq6\" (UID: \"0f0b110c-a11e-4e78-8e42-10c104fcf868\") " pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.303648 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vpwq\" (UniqueName: \"kubernetes.io/projected/af85dc64-1599-4534-8cc4-be005c8893c3-kube-api-access-4vpwq\") pod \"route-controller-manager-776cdc94d6-52hzh\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.305881 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt9rs\" (UniqueName: \"kubernetes.io/projected/948d9eda-ff2a-4ee3-913b-6a3f19481ee5-kube-api-access-mt9rs\") pod \"machine-api-operator-755bb95488-jzcn5\" (UID: 
\"948d9eda-ff2a-4ee3-913b-6a3f19481ee5\") " pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.334940 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-h5ft9"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.335078 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.338680 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.338824 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.338736 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.338725 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.339193 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.351116 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-pppml"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.351288 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.355816 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29458080-vx5nr"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.356030 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.357583 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.357713 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.358026 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.358369 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.362868 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.363936 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.364141 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.364502 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 04 00:12:17 crc 
kubenswrapper[5108]: I0104 00:12:17.364944 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.365435 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.365637 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.365697 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.365927 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.365980 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.368782 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.368871 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.368931 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.378784 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/255cfe17-72db-413a-8baa-b17a27bb2531-config\") pod \"kube-storage-version-migrator-operator-565b79b866-pzxxm\" (UID: \"255cfe17-72db-413a-8baa-b17a27bb2531\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.378830 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-tmp\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.378859 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkk4w\" (UniqueName: \"kubernetes.io/projected/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-kube-api-access-zkk4w\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.378910 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/47d021a5-d9a4-4860-9edd-02555049f552-audit\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.378932 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzfdt\" (UniqueName: \"kubernetes.io/projected/255cfe17-72db-413a-8baa-b17a27bb2531-kube-api-access-qzfdt\") pod \"kube-storage-version-migrator-operator-565b79b866-pzxxm\" (UID: \"255cfe17-72db-413a-8baa-b17a27bb2531\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.378975 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47d021a5-d9a4-4860-9edd-02555049f552-serving-cert\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.378998 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-serving-cert\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.379022 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvjk8\" (UniqueName: \"kubernetes.io/projected/47d021a5-d9a4-4860-9edd-02555049f552-kube-api-access-mvjk8\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.379064 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/47d021a5-d9a4-4860-9edd-02555049f552-etcd-client\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.379092 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/47d021a5-d9a4-4860-9edd-02555049f552-node-pullsecrets\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.379114 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/47d021a5-d9a4-4860-9edd-02555049f552-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.379136 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47d021a5-d9a4-4860-9edd-02555049f552-config\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.379160 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47d021a5-d9a4-4860-9edd-02555049f552-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.379180 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47d021a5-d9a4-4860-9edd-02555049f552-audit-dir\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.379219 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.379240 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/47d021a5-d9a4-4860-9edd-02555049f552-image-import-ca\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.379259 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/47d021a5-d9a4-4860-9edd-02555049f552-encryption-config\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.379285 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b58sk\" (UniqueName: \"kubernetes.io/projected/c20962fb-7828-40e8-854e-09cf60a0becd-kube-api-access-b58sk\") pod \"cluster-samples-operator-6b564684c8-s77qp\" (UID: \"c20962fb-7828-40e8-854e-09cf60a0becd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s77qp" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.379319 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/255cfe17-72db-413a-8baa-b17a27bb2531-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-pzxxm\" (UID: \"255cfe17-72db-413a-8baa-b17a27bb2531\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.379342 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c20962fb-7828-40e8-854e-09cf60a0becd-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-s77qp\" (UID: \"c20962fb-7828-40e8-854e-09cf60a0becd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s77qp" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.379365 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-config\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.379381 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-client-ca\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.379788 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.383294 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.384740 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29458080-vx5nr" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.386845 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.387226 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.391575 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-srgq4"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.391647 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.393523 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.393834 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.398124 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-bxnjs"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.398313 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.398442 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.398689 5108 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.398910 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.399316 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.401565 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.401756 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.401815 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.401831 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.401970 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.405309 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.405443 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.405805 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.405865 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.407017 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.407563 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.407630 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.408450 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.408717 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 04 00:12:17 crc 
kubenswrapper[5108]: I0104 00:12:17.409175 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.409325 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.409404 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.409548 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.411694 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.411866 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.417830 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.418682 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.425265 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.425532 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-6nmg2"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.426860 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.436139 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.439641 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-wl97g"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.439767 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.439894 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.441506 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.442567 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.442599 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.453112 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.468495 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.469521 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.469541 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.473289 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.473504 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.474075 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.475116 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.476018 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.477043 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.477119 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-wl97g" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.480292 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx49k\" (UniqueName: \"kubernetes.io/projected/b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd-kube-api-access-nx49k\") pod \"router-default-68cf44c8b8-6nmg2\" (UID: \"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd\") " pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.480331 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc782574-9478-4d61-a46b-b592c4b8a20d-config\") pod \"authentication-operator-7f5c659b84-4wfl4\" (UID: \"bc782574-9478-4d61-a46b-b592c4b8a20d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.480407 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.480437 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d98b3678-6b19-4259-b726-bf6940b01cbf-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-tcglk\" (UID: \"d98b3678-6b19-4259-b726-bf6940b01cbf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.480486 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/47d021a5-d9a4-4860-9edd-02555049f552-etcd-client\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.480512 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd-service-ca-bundle\") pod \"router-default-68cf44c8b8-6nmg2\" (UID: \"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd\") " pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.480543 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/47d021a5-d9a4-4860-9edd-02555049f552-node-pullsecrets\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.480568 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/47d021a5-d9a4-4860-9edd-02555049f552-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.480593 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd-stats-auth\") pod \"router-default-68cf44c8b8-6nmg2\" (UID: \"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd\") " pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 
00:12:17.480614 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6t8v\" (UniqueName: \"kubernetes.io/projected/52146c21-3246-4f94-b1ac-d912a24401ab-kube-api-access-p6t8v\") pod \"image-pruner-29458080-vx5nr\" (UID: \"52146c21-3246-4f94-b1ac-d912a24401ab\") " pod="openshift-image-registry/image-pruner-29458080-vx5nr" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.480640 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47d021a5-d9a4-4860-9edd-02555049f552-config\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.480668 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.480696 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47d021a5-d9a4-4860-9edd-02555049f552-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.480720 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47d021a5-d9a4-4860-9edd-02555049f552-audit-dir\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.480739 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d98b3678-6b19-4259-b726-bf6940b01cbf-config\") pod \"kube-controller-manager-operator-69d5f845f8-tcglk\" (UID: \"d98b3678-6b19-4259-b726-bf6940b01cbf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.480848 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/47d021a5-d9a4-4860-9edd-02555049f552-node-pullsecrets\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.481562 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47d021a5-d9a4-4860-9edd-02555049f552-config\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.481613 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/47d021a5-d9a4-4860-9edd-02555049f552-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.482172 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47d021a5-d9a4-4860-9edd-02555049f552-audit-dir\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.482274 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.482316 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/47d021a5-d9a4-4860-9edd-02555049f552-image-import-ca\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.482336 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/47d021a5-d9a4-4860-9edd-02555049f552-encryption-config\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.482365 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc782574-9478-4d61-a46b-b592c4b8a20d-serving-cert\") pod \"authentication-operator-7f5c659b84-4wfl4\" (UID: \"bc782574-9478-4d61-a46b-b592c4b8a20d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.482399 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d98b3678-6b19-4259-b726-bf6940b01cbf-kube-api-access\") pod 
\"kube-controller-manager-operator-69d5f845f8-tcglk\" (UID: \"d98b3678-6b19-4259-b726-bf6940b01cbf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.482427 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.482467 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b58sk\" (UniqueName: \"kubernetes.io/projected/c20962fb-7828-40e8-854e-09cf60a0becd-kube-api-access-b58sk\") pod \"cluster-samples-operator-6b564684c8-s77qp\" (UID: \"c20962fb-7828-40e8-854e-09cf60a0becd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s77qp" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.482487 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgqw6\" (UniqueName: \"kubernetes.io/projected/aefe6a9a-7107-42ce-8a8c-dddb8b52fded-kube-api-access-mgqw6\") pod \"machine-approver-54c688565-srgq4\" (UID: \"aefe6a9a-7107-42ce-8a8c-dddb8b52fded\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.482502 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47d021a5-d9a4-4860-9edd-02555049f552-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc 
kubenswrapper[5108]: I0104 00:12:17.482534 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-audit-policies\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.482562 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483070 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx79q\" (UniqueName: \"kubernetes.io/projected/bc782574-9478-4d61-a46b-b592c4b8a20d-kube-api-access-mx79q\") pod \"authentication-operator-7f5c659b84-4wfl4\" (UID: \"bc782574-9478-4d61-a46b-b592c4b8a20d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483131 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483175 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/255cfe17-72db-413a-8baa-b17a27bb2531-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-pzxxm\" (UID: \"255cfe17-72db-413a-8baa-b17a27bb2531\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483246 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c20962fb-7828-40e8-854e-09cf60a0becd-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-s77qp\" (UID: \"c20962fb-7828-40e8-854e-09cf60a0becd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s77qp" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483283 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc782574-9478-4d61-a46b-b592c4b8a20d-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-4wfl4\" (UID: \"bc782574-9478-4d61-a46b-b592c4b8a20d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483309 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-config\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483335 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-client-ca\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483361 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc782574-9478-4d61-a46b-b592c4b8a20d-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-4wfl4\" (UID: \"bc782574-9478-4d61-a46b-b592c4b8a20d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483385 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483410 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/aefe6a9a-7107-42ce-8a8c-dddb8b52fded-machine-approver-tls\") pod \"machine-approver-54c688565-srgq4\" (UID: \"aefe6a9a-7107-42ce-8a8c-dddb8b52fded\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483439 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d98b3678-6b19-4259-b726-bf6940b01cbf-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-tcglk\" (UID: \"d98b3678-6b19-4259-b726-bf6940b01cbf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483465 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483489 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483601 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483631 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/255cfe17-72db-413a-8baa-b17a27bb2531-config\") pod \"kube-storage-version-migrator-operator-565b79b866-pzxxm\" (UID: \"255cfe17-72db-413a-8baa-b17a27bb2531\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483653 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-tmp\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483674 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zkk4w\" (UniqueName: \"kubernetes.io/projected/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-kube-api-access-zkk4w\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483696 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/52146c21-3246-4f94-b1ac-d912a24401ab-serviceca\") pod \"image-pruner-29458080-vx5nr\" (UID: \"52146c21-3246-4f94-b1ac-d912a24401ab\") " pod="openshift-image-registry/image-pruner-29458080-vx5nr" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483743 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aefe6a9a-7107-42ce-8a8c-dddb8b52fded-auth-proxy-config\") pod \"machine-approver-54c688565-srgq4\" (UID: \"aefe6a9a-7107-42ce-8a8c-dddb8b52fded\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483767 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/47d021a5-d9a4-4860-9edd-02555049f552-audit\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483793 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qzfdt\" (UniqueName: \"kubernetes.io/projected/255cfe17-72db-413a-8baa-b17a27bb2531-kube-api-access-qzfdt\") pod \"kube-storage-version-migrator-operator-565b79b866-pzxxm\" (UID: \"255cfe17-72db-413a-8baa-b17a27bb2531\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483816 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483836 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf6hj\" (UniqueName: \"kubernetes.io/projected/0ed21f10-7015-400b-bd89-9b5ba497be04-kube-api-access-zf6hj\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483857 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aefe6a9a-7107-42ce-8a8c-dddb8b52fded-config\") pod \"machine-approver-54c688565-srgq4\" (UID: \"aefe6a9a-7107-42ce-8a8c-dddb8b52fded\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483902 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47d021a5-d9a4-4860-9edd-02555049f552-serving-cert\") pod 
\"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483930 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483958 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ed21f10-7015-400b-bd89-9b5ba497be04-audit-dir\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.483980 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd-metrics-certs\") pod \"router-default-68cf44c8b8-6nmg2\" (UID: \"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd\") " pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.484004 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-serving-cert\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.484025 5108 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-mvjk8\" (UniqueName: \"kubernetes.io/projected/47d021a5-d9a4-4860-9edd-02555049f552-kube-api-access-mvjk8\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.484340 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd-default-certificate\") pod \"router-default-68cf44c8b8-6nmg2\" (UID: \"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd\") " pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.486404 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.486991 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/47d021a5-d9a4-4860-9edd-02555049f552-image-import-ca\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.487240 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.486998 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/47d021a5-d9a4-4860-9edd-02555049f552-etcd-client\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: 
\"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.487664 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.487855 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/255cfe17-72db-413a-8baa-b17a27bb2531-config\") pod \"kube-storage-version-migrator-operator-565b79b866-pzxxm\" (UID: \"255cfe17-72db-413a-8baa-b17a27bb2531\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.488637 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-client-ca\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.489487 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-tmp\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.490776 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c20962fb-7828-40e8-854e-09cf60a0becd-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-s77qp\" (UID: \"c20962fb-7828-40e8-854e-09cf60a0becd\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s77qp" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.491043 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47d021a5-d9a4-4860-9edd-02555049f552-serving-cert\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.492065 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-config\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.492394 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.494562 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/47d021a5-d9a4-4860-9edd-02555049f552-encryption-config\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.496023 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/255cfe17-72db-413a-8baa-b17a27bb2531-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-pzxxm\" (UID: \"255cfe17-72db-413a-8baa-b17a27bb2531\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.496073 5108 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.496544 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-serving-cert\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.498849 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/47d021a5-d9a4-4860-9edd-02555049f552-audit\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.501601 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.509332 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.514483 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.534025 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.538822 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-tvrx6"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.539111 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.547404 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-glcdh"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.548115 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-tvrx6" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.551855 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-s5hd7"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.552142 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-glcdh" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.554406 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-j22zl"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.555479 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s5hd7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.556661 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.561184 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586346 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/52146c21-3246-4f94-b1ac-d912a24401ab-serviceca\") pod \"image-pruner-29458080-vx5nr\" (UID: \"52146c21-3246-4f94-b1ac-d912a24401ab\") " pod="openshift-image-registry/image-pruner-29458080-vx5nr" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586419 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/684e8e97-32b5-46c7-b3e0-0d89c55d7214-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-cbp9q\" (UID: \"684e8e97-32b5-46c7-b3e0-0d89c55d7214\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586448 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e38c1fa-0767-4ade-86be-f890237f9c94-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-wmv7m\" (UID: \"3e38c1fa-0767-4ade-86be-f890237f9c94\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586469 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-etcd-client\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586488 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe36c33b-eeaa-4b44-9ccd-d44131ccebce-serving-cert\") pod \"console-operator-67c89758df-wl97g\" (UID: \"fe36c33b-eeaa-4b44-9ccd-d44131ccebce\") " pod="openshift-console-operator/console-operator-67c89758df-wl97g" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586508 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aefe6a9a-7107-42ce-8a8c-dddb8b52fded-auth-proxy-config\") pod \"machine-approver-54c688565-srgq4\" (UID: \"aefe6a9a-7107-42ce-8a8c-dddb8b52fded\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586529 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nblmx\" (UniqueName: \"kubernetes.io/projected/68f75634-8fb1-40a4-801d-6355d62d81f8-kube-api-access-nblmx\") pod \"downloads-747b44746d-glcdh\" (UID: \"68f75634-8fb1-40a4-801d-6355d62d81f8\") " pod="openshift-console/downloads-747b44746d-glcdh" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586553 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 
00:12:17.586572 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zf6hj\" (UniqueName: \"kubernetes.io/projected/0ed21f10-7015-400b-bd89-9b5ba497be04-kube-api-access-zf6hj\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586593 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586608 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586632 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aefe6a9a-7107-42ce-8a8c-dddb8b52fded-config\") pod \"machine-approver-54c688565-srgq4\" (UID: \"aefe6a9a-7107-42ce-8a8c-dddb8b52fded\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586647 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-config\") pod 
\"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586667 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngbdw\" (UniqueName: \"kubernetes.io/projected/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-kube-api-access-ngbdw\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586701 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586721 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ed21f10-7015-400b-bd89-9b5ba497be04-audit-dir\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586742 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd-metrics-certs\") pod \"router-default-68cf44c8b8-6nmg2\" (UID: \"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd\") " pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586758 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-serving-cert\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586780 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe36c33b-eeaa-4b44-9ccd-d44131ccebce-config\") pod \"console-operator-67c89758df-wl97g\" (UID: \"fe36c33b-eeaa-4b44-9ccd-d44131ccebce\") " pod="openshift-console-operator/console-operator-67c89758df-wl97g" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586796 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzmsr\" (UniqueName: \"kubernetes.io/projected/fe36c33b-eeaa-4b44-9ccd-d44131ccebce-kube-api-access-rzmsr\") pod \"console-operator-67c89758df-wl97g\" (UID: \"fe36c33b-eeaa-4b44-9ccd-d44131ccebce\") " pod="openshift-console-operator/console-operator-67c89758df-wl97g" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586816 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd-default-certificate\") pod \"router-default-68cf44c8b8-6nmg2\" (UID: \"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd\") " pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586833 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nx49k\" (UniqueName: \"kubernetes.io/projected/b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd-kube-api-access-nx49k\") pod \"router-default-68cf44c8b8-6nmg2\" (UID: \"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd\") " pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:17 crc 
kubenswrapper[5108]: I0104 00:12:17.586857 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxp4n\" (UniqueName: \"kubernetes.io/projected/6a476be9-e3a0-47e4-ab8f-29a4601a9134-kube-api-access-xxp4n\") pod \"dns-operator-799b87ffcd-tvrx6\" (UID: \"6a476be9-e3a0-47e4-ab8f-29a4601a9134\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-tvrx6" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.586979 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-etcd-ca\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587021 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-tmp-dir\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587071 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc782574-9478-4d61-a46b-b592c4b8a20d-config\") pod \"authentication-operator-7f5c659b84-4wfl4\" (UID: \"bc782574-9478-4d61-a46b-b592c4b8a20d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587099 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: 
\"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587140 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d98b3678-6b19-4259-b726-bf6940b01cbf-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-tcglk\" (UID: \"d98b3678-6b19-4259-b726-bf6940b01cbf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587163 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gddnc\" (UniqueName: \"kubernetes.io/projected/684e8e97-32b5-46c7-b3e0-0d89c55d7214-kube-api-access-gddnc\") pod \"machine-config-controller-f9cdd68f7-cbp9q\" (UID: \"684e8e97-32b5-46c7-b3e0-0d89c55d7214\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587185 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wglfk\" (UniqueName: \"kubernetes.io/projected/3e38c1fa-0767-4ade-86be-f890237f9c94-kube-api-access-wglfk\") pod \"ingress-operator-6b9cb4dbcf-wmv7m\" (UID: \"3e38c1fa-0767-4ade-86be-f890237f9c94\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587280 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd-service-ca-bundle\") pod \"router-default-68cf44c8b8-6nmg2\" (UID: \"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd\") " pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587303 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6a476be9-e3a0-47e4-ab8f-29a4601a9134-tmp-dir\") pod \"dns-operator-799b87ffcd-tvrx6\" (UID: \"6a476be9-e3a0-47e4-ab8f-29a4601a9134\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-tvrx6" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587325 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-tmp\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587352 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3e38c1fa-0767-4ade-86be-f890237f9c94-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-wmv7m\" (UID: \"3e38c1fa-0767-4ade-86be-f890237f9c94\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587375 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd-stats-auth\") pod \"router-default-68cf44c8b8-6nmg2\" (UID: \"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd\") " pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587404 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p6t8v\" (UniqueName: \"kubernetes.io/projected/52146c21-3246-4f94-b1ac-d912a24401ab-kube-api-access-p6t8v\") pod \"image-pruner-29458080-vx5nr\" (UID: \"52146c21-3246-4f94-b1ac-d912a24401ab\") " pod="openshift-image-registry/image-pruner-29458080-vx5nr" Jan 04 
00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587435 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587455 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qmkl\" (UniqueName: \"kubernetes.io/projected/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-kube-api-access-2qmkl\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587498 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d98b3678-6b19-4259-b726-bf6940b01cbf-config\") pod \"kube-controller-manager-operator-69d5f845f8-tcglk\" (UID: \"d98b3678-6b19-4259-b726-bf6940b01cbf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587520 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75gpk\" (UniqueName: \"kubernetes.io/projected/a07ebe6a-ff42-4584-8503-9afefb4bcee1-kube-api-access-75gpk\") pod \"migrator-866fcbc849-s5hd7\" (UID: \"a07ebe6a-ff42-4584-8503-9afefb4bcee1\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s5hd7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587554 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc782574-9478-4d61-a46b-b592c4b8a20d-serving-cert\") pod \"authentication-operator-7f5c659b84-4wfl4\" (UID: \"bc782574-9478-4d61-a46b-b592c4b8a20d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587573 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d98b3678-6b19-4259-b726-bf6940b01cbf-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-tcglk\" (UID: \"d98b3678-6b19-4259-b726-bf6940b01cbf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587594 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587595 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aefe6a9a-7107-42ce-8a8c-dddb8b52fded-config\") pod \"machine-approver-54c688565-srgq4\" (UID: \"aefe6a9a-7107-42ce-8a8c-dddb8b52fded\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587629 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mgqw6\" (UniqueName: \"kubernetes.io/projected/aefe6a9a-7107-42ce-8a8c-dddb8b52fded-kube-api-access-mgqw6\") pod \"machine-approver-54c688565-srgq4\" (UID: \"aefe6a9a-7107-42ce-8a8c-dddb8b52fded\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" 
Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587684 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-audit-policies\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587704 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587729 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mx79q\" (UniqueName: \"kubernetes.io/projected/bc782574-9478-4d61-a46b-b592c4b8a20d-kube-api-access-mx79q\") pod \"authentication-operator-7f5c659b84-4wfl4\" (UID: \"bc782574-9478-4d61-a46b-b592c4b8a20d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587753 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587775 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-etcd-service-ca\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587795 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587833 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc782574-9478-4d61-a46b-b592c4b8a20d-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-4wfl4\" (UID: \"bc782574-9478-4d61-a46b-b592c4b8a20d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587851 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587877 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc782574-9478-4d61-a46b-b592c4b8a20d-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-4wfl4\" (UID: \"bc782574-9478-4d61-a46b-b592c4b8a20d\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587896 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587914 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/aefe6a9a-7107-42ce-8a8c-dddb8b52fded-machine-approver-tls\") pod \"machine-approver-54c688565-srgq4\" (UID: \"aefe6a9a-7107-42ce-8a8c-dddb8b52fded\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587935 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/684e8e97-32b5-46c7-b3e0-0d89c55d7214-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-cbp9q\" (UID: \"684e8e97-32b5-46c7-b3e0-0d89c55d7214\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587971 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d98b3678-6b19-4259-b726-bf6940b01cbf-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-tcglk\" (UID: \"d98b3678-6b19-4259-b726-bf6940b01cbf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.587990 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.588011 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.588030 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.588056 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3e38c1fa-0767-4ade-86be-f890237f9c94-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-wmv7m\" (UID: \"3e38c1fa-0767-4ade-86be-f890237f9c94\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.588075 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a476be9-e3a0-47e4-ab8f-29a4601a9134-metrics-tls\") pod \"dns-operator-799b87ffcd-tvrx6\" (UID: 
\"6a476be9-e3a0-47e4-ab8f-29a4601a9134\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-tvrx6" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.588093 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe36c33b-eeaa-4b44-9ccd-d44131ccebce-trusted-ca\") pod \"console-operator-67c89758df-wl97g\" (UID: \"fe36c33b-eeaa-4b44-9ccd-d44131ccebce\") " pod="openshift-console-operator/console-operator-67c89758df-wl97g" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.588437 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.589432 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc782574-9478-4d61-a46b-b592c4b8a20d-config\") pod \"authentication-operator-7f5c659b84-4wfl4\" (UID: \"bc782574-9478-4d61-a46b-b592c4b8a20d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.590243 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ed21f10-7015-400b-bd89-9b5ba497be04-audit-dir\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.591640 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/52146c21-3246-4f94-b1ac-d912a24401ab-serviceca\") pod \"image-pruner-29458080-vx5nr\" (UID: \"52146c21-3246-4f94-b1ac-d912a24401ab\") " pod="openshift-image-registry/image-pruner-29458080-vx5nr" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.593227 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aefe6a9a-7107-42ce-8a8c-dddb8b52fded-auth-proxy-config\") pod \"machine-approver-54c688565-srgq4\" (UID: \"aefe6a9a-7107-42ce-8a8c-dddb8b52fded\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.593788 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.597854 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.597937 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d98b3678-6b19-4259-b726-bf6940b01cbf-config\") pod \"kube-controller-manager-operator-69d5f845f8-tcglk\" (UID: \"d98b3678-6b19-4259-b726-bf6940b01cbf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.598983 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d98b3678-6b19-4259-b726-bf6940b01cbf-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-tcglk\" (UID: \"d98b3678-6b19-4259-b726-bf6940b01cbf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.602118 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc782574-9478-4d61-a46b-b592c4b8a20d-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-4wfl4\" (UID: \"bc782574-9478-4d61-a46b-b592c4b8a20d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.599883 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-audit-policies\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.600514 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc782574-9478-4d61-a46b-b592c4b8a20d-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-4wfl4\" (UID: \"bc782574-9478-4d61-a46b-b592c4b8a20d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.600667 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.600903 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d98b3678-6b19-4259-b726-bf6940b01cbf-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-tcglk\" (UID: \"d98b3678-6b19-4259-b726-bf6940b01cbf\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.601078 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.599429 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd-metrics-certs\") pod \"router-default-68cf44c8b8-6nmg2\" (UID: \"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd\") " pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.603060 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.604474 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.605297 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/aefe6a9a-7107-42ce-8a8c-dddb8b52fded-machine-approver-tls\") pod \"machine-approver-54c688565-srgq4\" (UID: \"aefe6a9a-7107-42ce-8a8c-dddb8b52fded\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.606543 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.614192 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc782574-9478-4d61-a46b-b592c4b8a20d-serving-cert\") pod \"authentication-operator-7f5c659b84-4wfl4\" (UID: \"bc782574-9478-4d61-a46b-b592c4b8a20d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.615088 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.617131 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 
00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.617270 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.618392 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd-stats-auth\") pod \"router-default-68cf44c8b8-6nmg2\" (UID: \"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd\") " pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.618769 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.619891 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.622330 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-shks7"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.622597 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" 
(UniqueName: \"kubernetes.io/secret/b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd-default-certificate\") pod \"router-default-68cf44c8b8-6nmg2\" (UID: \"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd\") " pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.622647 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.622842 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.625735 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd-service-ca-bundle\") pod \"router-default-68cf44c8b8-6nmg2\" (UID: \"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd\") " pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.636923 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.638236 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.639155 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.642552 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-tptrl"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.642694 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.645758 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nbqsh"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.653245 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.653428 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.654044 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.656739 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.660531 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-rsjsp"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.660701 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.669149 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.669572 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-rsjsp" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.671937 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.674856 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.675215 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.693950 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695059 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6a476be9-e3a0-47e4-ab8f-29a4601a9134-tmp-dir\") pod \"dns-operator-799b87ffcd-tvrx6\" (UID: \"6a476be9-e3a0-47e4-ab8f-29a4601a9134\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-tvrx6" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695128 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-console-config\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695284 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-75gpk\" (UniqueName: \"kubernetes.io/projected/a07ebe6a-ff42-4584-8503-9afefb4bcee1-kube-api-access-75gpk\") pod \"migrator-866fcbc849-s5hd7\" 
(UID: \"a07ebe6a-ff42-4584-8503-9afefb4bcee1\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s5hd7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695341 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-oauth-serving-cert\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695376 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-etcd-service-ca\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695398 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695426 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695444 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-console-serving-cert\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695468 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3e38c1fa-0767-4ade-86be-f890237f9c94-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-wmv7m\" (UID: \"3e38c1fa-0767-4ade-86be-f890237f9c94\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695502 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695521 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-console-oauth-config\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695538 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-trusted-ca-bundle\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " 
pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695556 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-config\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695580 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe36c33b-eeaa-4b44-9ccd-d44131ccebce-serving-cert\") pod \"console-operator-67c89758df-wl97g\" (UID: \"fe36c33b-eeaa-4b44-9ccd-d44131ccebce\") " pod="openshift-console-operator/console-operator-67c89758df-wl97g" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695607 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nblmx\" (UniqueName: \"kubernetes.io/projected/68f75634-8fb1-40a4-801d-6355d62d81f8-kube-api-access-nblmx\") pod \"downloads-747b44746d-glcdh\" (UID: \"68f75634-8fb1-40a4-801d-6355d62d81f8\") " pod="openshift-console/downloads-747b44746d-glcdh" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695644 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c76448af-1e86-4765-83a0-7d9cd39bd5a6-tmpfs\") pod \"olm-operator-5cdf44d969-8qhfw\" (UID: \"c76448af-1e86-4765-83a0-7d9cd39bd5a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.695686 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6a476be9-e3a0-47e4-ab8f-29a4601a9134-tmp-dir\") pod \"dns-operator-799b87ffcd-tvrx6\" (UID: \"6a476be9-e3a0-47e4-ab8f-29a4601a9134\") " 
pod="openshift-dns-operator/dns-operator-799b87ffcd-tvrx6" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.696283 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-serving-cert\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.697101 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe36c33b-eeaa-4b44-9ccd-d44131ccebce-config\") pod \"console-operator-67c89758df-wl97g\" (UID: \"fe36c33b-eeaa-4b44-9ccd-d44131ccebce\") " pod="openshift-console-operator/console-operator-67c89758df-wl97g" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.697253 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rzmsr\" (UniqueName: \"kubernetes.io/projected/fe36c33b-eeaa-4b44-9ccd-d44131ccebce-kube-api-access-rzmsr\") pod \"console-operator-67c89758df-wl97g\" (UID: \"fe36c33b-eeaa-4b44-9ccd-d44131ccebce\") " pod="openshift-console-operator/console-operator-67c89758df-wl97g" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.697401 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2qmkl\" (UniqueName: \"kubernetes.io/projected/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-kube-api-access-2qmkl\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.697717 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gddnc\" (UniqueName: 
\"kubernetes.io/projected/684e8e97-32b5-46c7-b3e0-0d89c55d7214-kube-api-access-gddnc\") pod \"machine-config-controller-f9cdd68f7-cbp9q\" (UID: \"684e8e97-32b5-46c7-b3e0-0d89c55d7214\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.697747 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wglfk\" (UniqueName: \"kubernetes.io/projected/3e38c1fa-0767-4ade-86be-f890237f9c94-kube-api-access-wglfk\") pod \"ingress-operator-6b9cb4dbcf-wmv7m\" (UID: \"3e38c1fa-0767-4ade-86be-f890237f9c94\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.697776 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4e8a0ac-421f-4300-8f7c-33e9128a0000-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-j8nb7\" (UID: \"b4e8a0ac-421f-4300-8f7c-33e9128a0000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.697834 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3e38c1fa-0767-4ade-86be-f890237f9c94-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-wmv7m\" (UID: \"3e38c1fa-0767-4ade-86be-f890237f9c94\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.698043 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.698059 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c76448af-1e86-4765-83a0-7d9cd39bd5a6-srv-cert\") pod \"olm-operator-5cdf44d969-8qhfw\" (UID: \"c76448af-1e86-4765-83a0-7d9cd39bd5a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.698312 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/684e8e97-32b5-46c7-b3e0-0d89c55d7214-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-cbp9q\" (UID: \"684e8e97-32b5-46c7-b3e0-0d89c55d7214\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.698487 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jqtw\" (UniqueName: \"kubernetes.io/projected/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-kube-api-access-2jqtw\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.698900 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a476be9-e3a0-47e4-ab8f-29a4601a9134-metrics-tls\") pod \"dns-operator-799b87ffcd-tvrx6\" (UID: \"6a476be9-e3a0-47e4-ab8f-29a4601a9134\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-tvrx6" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.699028 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.699615 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe36c33b-eeaa-4b44-9ccd-d44131ccebce-trusted-ca\") pod \"console-operator-67c89758df-wl97g\" (UID: \"fe36c33b-eeaa-4b44-9ccd-d44131ccebce\") " pod="openshift-console-operator/console-operator-67c89758df-wl97g" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.699755 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ngbdw\" (UniqueName: \"kubernetes.io/projected/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-kube-api-access-ngbdw\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.699905 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-etcd-client\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.700047 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/684e8e97-32b5-46c7-b3e0-0d89c55d7214-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-cbp9q\" (UID: \"684e8e97-32b5-46c7-b3e0-0d89c55d7214\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.700150 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e38c1fa-0767-4ade-86be-f890237f9c94-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-wmv7m\" (UID: \"3e38c1fa-0767-4ade-86be-f890237f9c94\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.700306 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c76448af-1e86-4765-83a0-7d9cd39bd5a6-profile-collector-cert\") pod \"olm-operator-5cdf44d969-8qhfw\" (UID: \"c76448af-1e86-4765-83a0-7d9cd39bd5a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.700538 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-service-ca\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.700654 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4e8a0ac-421f-4300-8f7c-33e9128a0000-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-j8nb7\" (UID: \"b4e8a0ac-421f-4300-8f7c-33e9128a0000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.700349 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/684e8e97-32b5-46c7-b3e0-0d89c55d7214-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-cbp9q\" (UID: 
\"684e8e97-32b5-46c7-b3e0-0d89c55d7214\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.703139 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b4e8a0ac-421f-4300-8f7c-33e9128a0000-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-j8nb7\" (UID: \"b4e8a0ac-421f-4300-8f7c-33e9128a0000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.703442 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4e8a0ac-421f-4300-8f7c-33e9128a0000-config\") pod \"openshift-kube-scheduler-operator-54f497555d-j8nb7\" (UID: \"b4e8a0ac-421f-4300-8f7c-33e9128a0000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.703615 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg4xr\" (UniqueName: \"kubernetes.io/projected/c76448af-1e86-4765-83a0-7d9cd39bd5a6-kube-api-access-zg4xr\") pod \"olm-operator-5cdf44d969-8qhfw\" (UID: \"c76448af-1e86-4765-83a0-7d9cd39bd5a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.703812 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xxp4n\" (UniqueName: \"kubernetes.io/projected/6a476be9-e3a0-47e4-ab8f-29a4601a9134-kube-api-access-xxp4n\") pod \"dns-operator-799b87ffcd-tvrx6\" (UID: \"6a476be9-e3a0-47e4-ab8f-29a4601a9134\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-tvrx6" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.703880 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-etcd-ca\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.703903 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-tmp\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.704126 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-tmp-dir\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.705133 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-tmp\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.705733 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-tmp-dir\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 
00:12:17.714644 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.745062 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.755501 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.772773 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/684e8e97-32b5-46c7-b3e0-0d89c55d7214-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-cbp9q\" (UID: \"684e8e97-32b5-46c7-b3e0-0d89c55d7214\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q" Jan 04 00:12:17 crc kubenswrapper[5108]: W0104 00:12:17.775299 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f0b110c_a11e_4e78_8e42_10c104fcf868.slice/crio-676bd188fbf15156647f39e022e94e46e8248a9478ac2f7afa644cdee617834b WatchSource:0}: Error finding container 676bd188fbf15156647f39e022e94e46e8248a9478ac2f7afa644cdee617834b: Status 404 returned error can't find the container with id 676bd188fbf15156647f39e022e94e46e8248a9478ac2f7afa644cdee617834b Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.776310 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.794306 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 
00:12:17.803034 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-pn9xb"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.803992 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.806625 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-console-oauth-config\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.806759 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-trusted-ca-bundle\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.806908 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c76448af-1e86-4765-83a0-7d9cd39bd5a6-tmpfs\") pod \"olm-operator-5cdf44d969-8qhfw\" (UID: \"c76448af-1e86-4765-83a0-7d9cd39bd5a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.807012 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4e8a0ac-421f-4300-8f7c-33e9128a0000-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-j8nb7\" (UID: \"b4e8a0ac-421f-4300-8f7c-33e9128a0000\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.807399 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c76448af-1e86-4765-83a0-7d9cd39bd5a6-srv-cert\") pod \"olm-operator-5cdf44d969-8qhfw\" (UID: \"c76448af-1e86-4765-83a0-7d9cd39bd5a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.807493 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2jqtw\" (UniqueName: \"kubernetes.io/projected/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-kube-api-access-2jqtw\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.807562 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c76448af-1e86-4765-83a0-7d9cd39bd5a6-profile-collector-cert\") pod \"olm-operator-5cdf44d969-8qhfw\" (UID: \"c76448af-1e86-4765-83a0-7d9cd39bd5a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.807603 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-service-ca\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.807627 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4e8a0ac-421f-4300-8f7c-33e9128a0000-serving-cert\") pod 
\"openshift-kube-scheduler-operator-54f497555d-j8nb7\" (UID: \"b4e8a0ac-421f-4300-8f7c-33e9128a0000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.807668 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b4e8a0ac-421f-4300-8f7c-33e9128a0000-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-j8nb7\" (UID: \"b4e8a0ac-421f-4300-8f7c-33e9128a0000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.807692 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4e8a0ac-421f-4300-8f7c-33e9128a0000-config\") pod \"openshift-kube-scheduler-operator-54f497555d-j8nb7\" (UID: \"b4e8a0ac-421f-4300-8f7c-33e9128a0000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.807719 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zg4xr\" (UniqueName: \"kubernetes.io/projected/c76448af-1e86-4765-83a0-7d9cd39bd5a6-kube-api-access-zg4xr\") pod \"olm-operator-5cdf44d969-8qhfw\" (UID: \"c76448af-1e86-4765-83a0-7d9cd39bd5a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.807800 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-console-config\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.807862 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-oauth-serving-cert\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.807904 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-console-serving-cert\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.812086 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c76448af-1e86-4765-83a0-7d9cd39bd5a6-tmpfs\") pod \"olm-operator-5cdf44d969-8qhfw\" (UID: \"c76448af-1e86-4765-83a0-7d9cd39bd5a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.812416 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b4e8a0ac-421f-4300-8f7c-33e9128a0000-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-j8nb7\" (UID: \"b4e8a0ac-421f-4300-8f7c-33e9128a0000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.813304 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.817513 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-2vn7s"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.818340 5108 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-pn9xb" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.818603 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe36c33b-eeaa-4b44-9ccd-d44131ccebce-config\") pod \"console-operator-67c89758df-wl97g\" (UID: \"fe36c33b-eeaa-4b44-9ccd-d44131ccebce\") " pod="openshift-console-operator/console-operator-67c89758df-wl97g" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.825636 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.826346 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-2vn7s" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.829214 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.831777 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.831989 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.832276 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.835408 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.835874 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.838465 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.848337 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.848816 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.868600 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.869547 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-jzcn5"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.869601 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29458080-vx5nr"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.869616 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.869628 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-pppml"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.869640 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.869651 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-7llq6"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.869662 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.869674 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-fsqx9"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.870582 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.872236 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe36c33b-eeaa-4b44-9ccd-d44131ccebce-trusted-ca\") pod \"console-operator-67c89758df-wl97g\" (UID: \"fe36c33b-eeaa-4b44-9ccd-d44131ccebce\") " pod="openshift-console-operator/console-operator-67c89758df-wl97g" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.874051 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.874092 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-h5ft9"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.874103 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-glcdh"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.874119 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5jjj4"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.878810 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.879422 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-fsqx9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.879631 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2gzj6"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.894899 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897454 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-wl97g"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897527 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897547 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-shks7"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897566 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897581 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897613 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897630 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-bxnjs"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897645 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-j22zl"] Jan 04 00:12:17 crc 
kubenswrapper[5108]: I0104 00:12:17.897660 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s77qp"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897673 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-tvrx6"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897691 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897707 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nbqsh"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897730 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897744 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897758 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897774 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-pn9xb"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897790 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897805 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897825 5108 kubelet.go:2544] 
"SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fsqx9"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897846 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-tptrl"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.897862 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-gdrn8"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.898860 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.899379 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2gzj6" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.900305 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe36c33b-eeaa-4b44-9ccd-d44131ccebce-serving-cert\") pod \"console-operator-67c89758df-wl97g\" (UID: \"fe36c33b-eeaa-4b44-9ccd-d44131ccebce\") " pod="openshift-console-operator/console-operator-67c89758df-wl97g" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.912079 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hvq52"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.915368 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gdrn8" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.921897 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5jjj4"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.921940 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.921954 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.921970 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.921981 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.921994 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-s5hd7"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.922003 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-2vn7s"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.922015 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-rsjsp"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.922029 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.922040 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-canary/ingress-canary-gdrn8"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.922275 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.923056 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-7llq6"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.926536 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.927036 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.929339 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-jzcn5"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.931383 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"] Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.934178 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b58sk\" (UniqueName: \"kubernetes.io/projected/c20962fb-7828-40e8-854e-09cf60a0becd-kube-api-access-b58sk\") pod \"cluster-samples-operator-6b564684c8-s77qp\" (UID: \"c20962fb-7828-40e8-854e-09cf60a0becd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s77qp" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.947528 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s77qp" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.959173 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvjk8\" (UniqueName: \"kubernetes.io/projected/47d021a5-d9a4-4860-9edd-02555049f552-kube-api-access-mvjk8\") pod \"apiserver-9ddfb9f55-h5ft9\" (UID: \"47d021a5-d9a4-4860-9edd-02555049f552\") " pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.971329 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkk4w\" (UniqueName: \"kubernetes.io/projected/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-kube-api-access-zkk4w\") pod \"controller-manager-65b6cccf98-pppml\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.984882 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.993445 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 04 00:12:17 crc kubenswrapper[5108]: I0104 00:12:17.994909 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzfdt\" (UniqueName: \"kubernetes.io/projected/255cfe17-72db-413a-8baa-b17a27bb2531-kube-api-access-qzfdt\") pod \"kube-storage-version-migrator-operator-565b79b866-pzxxm\" (UID: \"255cfe17-72db-413a-8baa-b17a27bb2531\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.002833 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.015331 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.021267 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:18 crc kubenswrapper[5108]: E0104 00:12:18.038523 5108 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f0b110c_a11e_4e78_8e42_10c104fcf868.slice/crio-163ce91e0369b2594bad68374ba7fc8f49f597542ab22b85edbf7bbd937a7f61.scope\": RecentStats: unable to find data in memory cache]" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.039437 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.041058 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.058873 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 04 00:12:18 crc 
kubenswrapper[5108]: I0104 00:12:18.073944 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e38c1fa-0767-4ade-86be-f890237f9c94-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-wmv7m\" (UID: \"3e38c1fa-0767-4ade-86be-f890237f9c94\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.083510 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.095311 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.116731 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.147370 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.149582 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3e38c1fa-0767-4ade-86be-f890237f9c94-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-wmv7m\" (UID: \"3e38c1fa-0767-4ade-86be-f890237f9c94\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.154553 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.176507 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 04 00:12:18 crc 
kubenswrapper[5108]: I0104 00:12:18.190252 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a476be9-e3a0-47e4-ab8f-29a4601a9134-metrics-tls\") pod \"dns-operator-799b87ffcd-tvrx6\" (UID: \"6a476be9-e3a0-47e4-ab8f-29a4601a9134\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-tvrx6" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.194721 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.215719 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.233873 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.256637 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.263513 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s77qp"] Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.266349 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.269241 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh" event={"ID":"af85dc64-1599-4534-8cc4-be005c8893c3","Type":"ContainerStarted","Data":"ea3ce2bbf87cf06f9e24dbf860bbfdb00c0c4b26fee413a69f508c97a812636b"} Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.276559 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.280403 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5" event={"ID":"948d9eda-ff2a-4ee3-913b-6a3f19481ee5","Type":"ContainerStarted","Data":"fa4d9855513ccb156c301b8584b6b078b61193b46938dd74d61a46e7b0ef6a12"} Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.287898 5108 generic.go:358] "Generic (PLEG): container finished" podID="0f0b110c-a11e-4e78-8e42-10c104fcf868" containerID="163ce91e0369b2594bad68374ba7fc8f49f597542ab22b85edbf7bbd937a7f61" exitCode=0 Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.288346 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6" event={"ID":"0f0b110c-a11e-4e78-8e42-10c104fcf868","Type":"ContainerDied","Data":"163ce91e0369b2594bad68374ba7fc8f49f597542ab22b85edbf7bbd937a7f61"} Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.288422 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6" event={"ID":"0f0b110c-a11e-4e78-8e42-10c104fcf868","Type":"ContainerStarted","Data":"676bd188fbf15156647f39e022e94e46e8248a9478ac2f7afa644cdee617834b"} Jan 04 00:12:18 crc 
kubenswrapper[5108]: I0104 00:12:18.296664 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.301639 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8" event={"ID":"7578c202-c52d-4bd7-b125-b369e37a7cb7","Type":"ContainerStarted","Data":"d05b759a25d38ba6347b46f7b292ad0f2923ffb50961258241be6eac394407d4"} Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.301717 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8" event={"ID":"7578c202-c52d-4bd7-b125-b369e37a7cb7","Type":"ContainerStarted","Data":"ff444c35d217e208e1c7de7edb9597ed038b619f28539c432d1a1c20ed5aabd2"} Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.304971 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-h5ft9"] Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.315146 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.315354 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" event={"ID":"21fce9b3-74a6-4ddd-9011-f891ea99e09c","Type":"ContainerStarted","Data":"04f9c8e1fe241312937affcd8b47446e23f1d021c606d405a1be74f82baf3abf"} Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.316158 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-pppml"] Jan 04 00:12:18 crc kubenswrapper[5108]: W0104 00:12:18.320178 5108 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47d021a5_d9a4_4860_9edd_02555049f552.slice/crio-d3568180fde6793f8364fc9c7633245bf609f4dbcf835ae3792434683f06460a WatchSource:0}: Error finding container d3568180fde6793f8364fc9c7633245bf609f4dbcf835ae3792434683f06460a: Status 404 returned error can't find the container with id d3568180fde6793f8364fc9c7633245bf609f4dbcf835ae3792434683f06460a Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.334773 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.372740 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf6hj\" (UniqueName: \"kubernetes.io/projected/0ed21f10-7015-400b-bd89-9b5ba497be04-kube-api-access-zf6hj\") pod \"oauth-openshift-66458b6674-bxnjs\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.406603 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6t8v\" (UniqueName: \"kubernetes.io/projected/52146c21-3246-4f94-b1ac-d912a24401ab-kube-api-access-p6t8v\") pod \"image-pruner-29458080-vx5nr\" (UID: \"52146c21-3246-4f94-b1ac-d912a24401ab\") " pod="openshift-image-registry/image-pruner-29458080-vx5nr" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.410143 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d98b3678-6b19-4259-b726-bf6940b01cbf-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-tcglk\" (UID: \"d98b3678-6b19-4259-b726-bf6940b01cbf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.431657 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgqw6\" (UniqueName: \"kubernetes.io/projected/aefe6a9a-7107-42ce-8a8c-dddb8b52fded-kube-api-access-mgqw6\") pod \"machine-approver-54c688565-srgq4\" (UID: \"aefe6a9a-7107-42ce-8a8c-dddb8b52fded\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.451879 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx79q\" (UniqueName: \"kubernetes.io/projected/bc782574-9478-4d61-a46b-b592c4b8a20d-kube-api-access-mx79q\") pod \"authentication-operator-7f5c659b84-4wfl4\" (UID: \"bc782574-9478-4d61-a46b-b592c4b8a20d\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.474089 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.475069 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-etcd-ca\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.475575 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx49k\" (UniqueName: \"kubernetes.io/projected/b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd-kube-api-access-nx49k\") pod \"router-default-68cf44c8b8-6nmg2\" (UID: \"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd\") " pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.495052 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 04 
00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.502319 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-serving-cert\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.517656 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.531971 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-etcd-client\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.532879 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm"] Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.533453 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.555228 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.555970 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-etcd-service-ca\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 
04 00:12:18 crc kubenswrapper[5108]: W0104 00:12:18.566320 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod255cfe17_72db_413a_8baa_b17a27bb2531.slice/crio-c2c1ac5ebd5cd3ba22595fc3daa8525f0a74ecbc302e6ed79b21cf8225f93782 WatchSource:0}: Error finding container c2c1ac5ebd5cd3ba22595fc3daa8525f0a74ecbc302e6ed79b21cf8225f93782: Status 404 returned error can't find the container with id c2c1ac5ebd5cd3ba22595fc3daa8525f0a74ecbc302e6ed79b21cf8225f93782 Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.573576 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.595126 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.622529 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29458080-vx5nr" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.623439 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.624747 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.628705 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-config\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.629338 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.631719 5108 request.go:752] "Waited before sending request" delay="1.008534508s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&limit=500&resourceVersion=0" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.634274 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.642644 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4e8a0ac-421f-4300-8f7c-33e9128a0000-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-j8nb7\" (UID: \"b4e8a0ac-421f-4300-8f7c-33e9128a0000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7" Jan 04 00:12:18 crc 
kubenswrapper[5108]: I0104 00:12:18.654646 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.657664 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:18 crc kubenswrapper[5108]: W0104 00:12:18.658448 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaefe6a9a_7107_42ce_8a8c_dddb8b52fded.slice/crio-578dc34dc3c53ec7da6cd275d096e8bd66fe8e602540b8a30a2047a502724a40 WatchSource:0}: Error finding container 578dc34dc3c53ec7da6cd275d096e8bd66fe8e602540b8a30a2047a502724a40: Status 404 returned error can't find the container with id 578dc34dc3c53ec7da6cd275d096e8bd66fe8e602540b8a30a2047a502724a40 Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.669977 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.680814 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.680900 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.694718 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.701140 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4e8a0ac-421f-4300-8f7c-33e9128a0000-config\") pod \"openshift-kube-scheduler-operator-54f497555d-j8nb7\" (UID: \"b4e8a0ac-421f-4300-8f7c-33e9128a0000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.739951 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.757322 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.759895 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-console-config\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.775010 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.786477 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-console-serving-cert\") pod \"console-64d44f6ddf-shks7\" (UID: 
\"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:18 crc kubenswrapper[5108]: W0104 00:12:18.787271 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb46b2db9_9cd3_4bd2_aa59_7ba4e54949bd.slice/crio-bb0e665d8356d57583cd0b3b04abafbedb77ababd530f6dc08fcdb7add33d47a WatchSource:0}: Error finding container bb0e665d8356d57583cd0b3b04abafbedb77ababd530f6dc08fcdb7add33d47a: Status 404 returned error can't find the container with id bb0e665d8356d57583cd0b3b04abafbedb77ababd530f6dc08fcdb7add33d47a Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.795437 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.803191 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-console-oauth-config\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:18 crc kubenswrapper[5108]: E0104 00:12:18.807666 5108 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 04 00:12:18 crc kubenswrapper[5108]: E0104 00:12:18.807771 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-trusted-ca-bundle podName:149cc7c1-09e7-4088-8c9c-b42e4ea2b604 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:19.307752974 +0000 UTC m=+113.296318060 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-trusted-ca-bundle") pod "console-64d44f6ddf-shks7" (UID: "149cc7c1-09e7-4088-8c9c-b42e4ea2b604") : failed to sync configmap cache: timed out waiting for the condition
Jan 04 00:12:18 crc kubenswrapper[5108]: E0104 00:12:18.809097 5108 configmap.go:193] Couldn't get configMap openshift-console/service-ca: failed to sync configmap cache: timed out waiting for the condition
Jan 04 00:12:18 crc kubenswrapper[5108]: E0104 00:12:18.809139 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-service-ca podName:149cc7c1-09e7-4088-8c9c-b42e4ea2b604 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:19.309129571 +0000 UTC m=+113.297694657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-service-ca") pod "console-64d44f6ddf-shks7" (UID: "149cc7c1-09e7-4088-8c9c-b42e4ea2b604") : failed to sync configmap cache: timed out waiting for the condition
Jan 04 00:12:18 crc kubenswrapper[5108]: E0104 00:12:18.809159 5108 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 04 00:12:18 crc kubenswrapper[5108]: E0104 00:12:18.809184 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c76448af-1e86-4765-83a0-7d9cd39bd5a6-srv-cert podName:c76448af-1e86-4765-83a0-7d9cd39bd5a6 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:19.309175332 +0000 UTC m=+113.297740418 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c76448af-1e86-4765-83a0-7d9cd39bd5a6-srv-cert") pod "olm-operator-5cdf44d969-8qhfw" (UID: "c76448af-1e86-4765-83a0-7d9cd39bd5a6") : failed to sync secret cache: timed out waiting for the condition
Jan 04 00:12:18 crc kubenswrapper[5108]: E0104 00:12:18.809259 5108 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: failed to sync configmap cache: timed out waiting for the condition
Jan 04 00:12:18 crc kubenswrapper[5108]: E0104 00:12:18.809406 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-oauth-serving-cert podName:149cc7c1-09e7-4088-8c9c-b42e4ea2b604 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:19.309375548 +0000 UTC m=+113.297940634 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-oauth-serving-cert") pod "console-64d44f6ddf-shks7" (UID: "149cc7c1-09e7-4088-8c9c-b42e4ea2b604") : failed to sync configmap cache: timed out waiting for the condition
Jan 04 00:12:18 crc kubenswrapper[5108]: E0104 00:12:18.809269 5108 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Jan 04 00:12:18 crc kubenswrapper[5108]: E0104 00:12:18.809457 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c76448af-1e86-4765-83a0-7d9cd39bd5a6-profile-collector-cert podName:c76448af-1e86-4765-83a0-7d9cd39bd5a6 nodeName:}" failed. No retries permitted until 2026-01-04 00:12:19.30945067 +0000 UTC m=+113.298015756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c76448af-1e86-4765-83a0-7d9cd39bd5a6-profile-collector-cert") pod "olm-operator-5cdf44d969-8qhfw" (UID: "c76448af-1e86-4765-83a0-7d9cd39bd5a6") : failed to sync secret cache: timed out waiting for the condition
Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.815179 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.842370 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.868513 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.873501 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.921508 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.921595 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.934031 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.956153 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.975899 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Jan 04 00:12:18 crc kubenswrapper[5108]: I0104 00:12:18.995021 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.018445 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.042151 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.063296 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.073650 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.095163 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.114943 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.139050 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.155110 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.253010 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.253961 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.254036 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk"]
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.254154 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.261654 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.304574 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.305939 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.316916 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.317707 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4"]
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.367703 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-oauth-serving-cert\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.367784 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-trusted-ca-bundle\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.367967 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c76448af-1e86-4765-83a0-7d9cd39bd5a6-srv-cert\") pod \"olm-operator-5cdf44d969-8qhfw\" (UID: \"c76448af-1e86-4765-83a0-7d9cd39bd5a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.368038 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c76448af-1e86-4765-83a0-7d9cd39bd5a6-profile-collector-cert\") pod \"olm-operator-5cdf44d969-8qhfw\" (UID: \"c76448af-1e86-4765-83a0-7d9cd39bd5a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.368071 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-service-ca\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.369660 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-service-ca\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.372081 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-trusted-ca-bundle\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.373757 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-oauth-serving-cert\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.378775 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29458080-vx5nr"]
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.391848 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-75gpk\" (UniqueName: \"kubernetes.io/projected/a07ebe6a-ff42-4584-8503-9afefb4bcee1-kube-api-access-75gpk\") pod \"migrator-866fcbc849-s5hd7\" (UID: \"a07ebe6a-ff42-4584-8503-9afefb4bcee1\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s5hd7"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.400168 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c76448af-1e86-4765-83a0-7d9cd39bd5a6-srv-cert\") pod \"olm-operator-5cdf44d969-8qhfw\" (UID: \"c76448af-1e86-4765-83a0-7d9cd39bd5a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.403226 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6" event={"ID":"0f0b110c-a11e-4e78-8e42-10c104fcf868","Type":"ContainerStarted","Data":"5ee271e93557c0bcb93155884a527933462a26b9db00020d604db7ba2f17849b"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.406666 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3e38c1fa-0767-4ade-86be-f890237f9c94-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-wmv7m\" (UID: \"3e38c1fa-0767-4ade-86be-f890237f9c94\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.406756 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" event={"ID":"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd","Type":"ContainerStarted","Data":"f84fe7a7d62a08cb257003c5c695b33bf6b501fcfddccfcc3de42a8808b31d53"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.406827 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" event={"ID":"b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd","Type":"ContainerStarted","Data":"bb0e665d8356d57583cd0b3b04abafbedb77ababd530f6dc08fcdb7add33d47a"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.430874 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.432713 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c76448af-1e86-4765-83a0-7d9cd39bd5a6-profile-collector-cert\") pod \"olm-operator-5cdf44d969-8qhfw\" (UID: \"c76448af-1e86-4765-83a0-7d9cd39bd5a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.439288 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nblmx\" (UniqueName: \"kubernetes.io/projected/68f75634-8fb1-40a4-801d-6355d62d81f8-kube-api-access-nblmx\") pod \"downloads-747b44746d-glcdh\" (UID: \"68f75634-8fb1-40a4-801d-6355d62d81f8\") " pod="openshift-console/downloads-747b44746d-glcdh"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.443099 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzmsr\" (UniqueName: \"kubernetes.io/projected/fe36c33b-eeaa-4b44-9ccd-d44131ccebce-kube-api-access-rzmsr\") pod \"console-operator-67c89758df-wl97g\" (UID: \"fe36c33b-eeaa-4b44-9ccd-d44131ccebce\") " pod="openshift-console-operator/console-operator-67c89758df-wl97g"
Jan 04 00:12:19 crc kubenswrapper[5108]: W0104 00:12:19.445832 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ed21f10_7015_400b_bd89_9b5ba497be04.slice/crio-e3cbca7b7073d07773ddebb451843f317eaed2d3c6976b7e16cf90380d2c3c84 WatchSource:0}: Error finding container e3cbca7b7073d07773ddebb451843f317eaed2d3c6976b7e16cf90380d2c3c84: Status 404 returned error can't find the container with id e3cbca7b7073d07773ddebb451843f317eaed2d3c6976b7e16cf90380d2c3c84
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.447149 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-bxnjs"]
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.447686 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" event={"ID":"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b","Type":"ContainerStarted","Data":"f85582411ebe356e741c362aa18aee7f5f9029e683dacc6fc669195e8a7bde14"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.447799 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" event={"ID":"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b","Type":"ContainerStarted","Data":"50840408b1f20d99508f7074aff7d5636b278cd005db8632c8c04fc15714caff"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.449290 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s77qp" event={"ID":"c20962fb-7828-40e8-854e-09cf60a0becd","Type":"ContainerStarted","Data":"9dcd47253d8356c5297b678907f144282aaad684e29ed02514c6f0477a42bbf7"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.449331 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s77qp" event={"ID":"c20962fb-7828-40e8-854e-09cf60a0becd","Type":"ContainerStarted","Data":"405db5d1ddcd350c5e1efd1984095df2eded0ede730c8a6a6cbd339232c10527"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.450078 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qmkl\" (UniqueName: \"kubernetes.io/projected/5b38a4e7-457e-47c5-8fd6-2e67b92a3974-kube-api-access-2qmkl\") pod \"cluster-image-registry-operator-86c45576b9-96248\" (UID: \"5b38a4e7-457e-47c5-8fd6-2e67b92a3974\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.451262 5108 generic.go:358] "Generic (PLEG): container finished" podID="21fce9b3-74a6-4ddd-9011-f891ea99e09c" containerID="19ecf648c58f9bfcb5da1b965dc87b66d7fc3430c6c1245f246b2c5f27c71b77" exitCode=0
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.451360 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" event={"ID":"21fce9b3-74a6-4ddd-9011-f891ea99e09c","Type":"ContainerDied","Data":"19ecf648c58f9bfcb5da1b965dc87b66d7fc3430c6c1245f246b2c5f27c71b77"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.453348 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" event={"ID":"d98b3678-6b19-4259-b726-bf6940b01cbf","Type":"ContainerStarted","Data":"fdeb88e930292bf05ce1f0d0a385bd1d7a249dfaf31acc933019d47134cd57b5"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.459300 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" event={"ID":"aefe6a9a-7107-42ce-8a8c-dddb8b52fded","Type":"ContainerStarted","Data":"56fb5b0f1288e84e45b1cbe9a3f2eac0376dcda10f0f768cb92c94f9729f6f66"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.459380 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" event={"ID":"aefe6a9a-7107-42ce-8a8c-dddb8b52fded","Type":"ContainerStarted","Data":"578dc34dc3c53ec7da6cd275d096e8bd66fe8e602540b8a30a2047a502724a40"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.460525 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm" event={"ID":"255cfe17-72db-413a-8baa-b17a27bb2531","Type":"ContainerStarted","Data":"628e772e21f2ff6dd30523baf30f1b9e174f621cebcc062bbbfa605804177686"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.460553 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm" event={"ID":"255cfe17-72db-413a-8baa-b17a27bb2531","Type":"ContainerStarted","Data":"c2c1ac5ebd5cd3ba22595fc3daa8525f0a74ecbc302e6ed79b21cf8225f93782"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.462136 5108 generic.go:358] "Generic (PLEG): container finished" podID="47d021a5-d9a4-4860-9edd-02555049f552" containerID="0929138464af381365105ba56b80c2c4f2897be18dfc349c213ad4b6a7d9386d" exitCode=0
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.462260 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" event={"ID":"47d021a5-d9a4-4860-9edd-02555049f552","Type":"ContainerDied","Data":"0929138464af381365105ba56b80c2c4f2897be18dfc349c213ad4b6a7d9386d"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.462291 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" event={"ID":"47d021a5-d9a4-4860-9edd-02555049f552","Type":"ContainerStarted","Data":"d3568180fde6793f8364fc9c7633245bf609f4dbcf835ae3792434683f06460a"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.463781 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh" event={"ID":"af85dc64-1599-4534-8cc4-be005c8893c3","Type":"ContainerStarted","Data":"019d7185940e98632cec357f6c635150fde9692dd996a4a2247cb560ea44c811"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.466091 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5" event={"ID":"948d9eda-ff2a-4ee3-913b-6a3f19481ee5","Type":"ContainerStarted","Data":"2bf09b2495adb3a34ef2d6367dbe1611743b25ab9194d9475b0e7e273ed8d01d"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.466118 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5" event={"ID":"948d9eda-ff2a-4ee3-913b-6a3f19481ee5","Type":"ContainerStarted","Data":"873e96d6f795727dd20109ed3937763f054bbd9a784efcd0807444af1408d43d"}
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.468047 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gddnc\" (UniqueName: \"kubernetes.io/projected/684e8e97-32b5-46c7-b3e0-0d89c55d7214-kube-api-access-gddnc\") pod \"machine-config-controller-f9cdd68f7-cbp9q\" (UID: \"684e8e97-32b5-46c7-b3e0-0d89c55d7214\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.484456 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s5hd7"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.488073 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wglfk\" (UniqueName: \"kubernetes.io/projected/3e38c1fa-0767-4ade-86be-f890237f9c94-kube-api-access-wglfk\") pod \"ingress-operator-6b9cb4dbcf-wmv7m\" (UID: \"3e38c1fa-0767-4ade-86be-f890237f9c94\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.509520 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngbdw\" (UniqueName: \"kubernetes.io/projected/0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802-kube-api-access-ngbdw\") pod \"etcd-operator-69b85846b6-j22zl\" (UID: \"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.529614 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxp4n\" (UniqueName: \"kubernetes.io/projected/6a476be9-e3a0-47e4-ab8f-29a4601a9134-kube-api-access-xxp4n\") pod \"dns-operator-799b87ffcd-tvrx6\" (UID: \"6a476be9-e3a0-47e4-ab8f-29a4601a9134\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-tvrx6"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.533548 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.554151 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.575035 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.593999 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.614675 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.652186 5108 request.go:752] "Waited before sending request" delay="1.843507659s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/serviceaccounts/openshift-kube-scheduler-operator/token"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.652444 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jqtw\" (UniqueName: \"kubernetes.io/projected/149cc7c1-09e7-4088-8c9c-b42e4ea2b604-kube-api-access-2jqtw\") pod \"console-64d44f6ddf-shks7\" (UID: \"149cc7c1-09e7-4088-8c9c-b42e4ea2b604\") " pod="openshift-console/console-64d44f6ddf-shks7"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.666163 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.671427 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4e8a0ac-421f-4300-8f7c-33e9128a0000-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-j8nb7\" (UID: \"b4e8a0ac-421f-4300-8f7c-33e9128a0000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.690884 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg4xr\" (UniqueName: \"kubernetes.io/projected/c76448af-1e86-4765-83a0-7d9cd39bd5a6-kube-api-access-zg4xr\") pod \"olm-operator-5cdf44d969-8qhfw\" (UID: \"c76448af-1e86-4765-83a0-7d9cd39bd5a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.694859 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.696846 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-glcdh"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.713751 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-s5hd7"]
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.714899 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.717448 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-wl97g"
Jan 04 00:12:19 crc kubenswrapper[5108]: W0104 00:12:19.730424 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda07ebe6a_ff42_4584_8503_9afefb4bcee1.slice/crio-b880174e2f4d4fc7529455ae8185674a6837f4379e8d44aa8774659292a735f9 WatchSource:0}: Error finding container b880174e2f4d4fc7529455ae8185674a6837f4379e8d44aa8774659292a735f9: Status 404 returned error can't find the container with id b880174e2f4d4fc7529455ae8185674a6837f4379e8d44aa8774659292a735f9
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.734736 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.739584 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.753834 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.764118 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.774466 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.778459 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-tvrx6"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.794755 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.795216 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.812408 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.816686 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.821711 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-shks7"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.834533 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.856750 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.875301 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.894456 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.899565 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw"
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.924972 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.935359 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.955179 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.973730 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Jan 04 00:12:19 crc kubenswrapper[5108]: I0104 00:12:19.994556 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.021786 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.033472 5108 scope.go:117] "RemoveContainer" containerID="001488f02f298ecdbad61e43398fbbe845d04526ab076c51dc377df80bfbc40e"
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.033869 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.033900 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6"
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.033913 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml"
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.037966 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q"]
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.043894 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.052932 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.100561 5108 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-pppml container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.101174 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" podUID="4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.106272 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.106579 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.113370 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.115675 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.135400 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.185705 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.186912 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.193337 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.222270 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.245520 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.262055 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.276051 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.359883 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.397778 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-glcdh"]
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.397835 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-wl97g"]
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.448888 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtqq5\" (UniqueName: \"kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-kube-api-access-wtqq5\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.448980 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.449072 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e91a34cb-17f6-49fe-a5a3-5c391614ed39-config\") pod \"openshift-controller-manager-operator-686468bdd5-zgssx\" (UID: \"e91a34cb-17f6-49fe-a5a3-5c391614ed39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx"
Jan 04 00:12:20
crc kubenswrapper[5108]: I0104 00:12:20.449245 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7c39a999-644f-43cd-b7e6-c7fd14281924-ca-trust-extracted\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.449269 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-tmp\") pod \"marketplace-operator-547dbd544d-tptrl\" (UID: \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.449424 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7c39a999-644f-43cd-b7e6-c7fd14281924-installation-pull-secrets\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.449444 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7c39a999-644f-43cd-b7e6-c7fd14281924-registry-certificates\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.450469 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-registry-tls\") 
pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.450770 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwwtd\" (UniqueName: \"kubernetes.io/projected/e91a34cb-17f6-49fe-a5a3-5c391614ed39-kube-api-access-mwwtd\") pod \"openshift-controller-manager-operator-686468bdd5-zgssx\" (UID: \"e91a34cb-17f6-49fe-a5a3-5c391614ed39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.450798 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c39a999-644f-43cd-b7e6-c7fd14281924-trusted-ca\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.450850 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e91a34cb-17f6-49fe-a5a3-5c391614ed39-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-zgssx\" (UID: \"e91a34cb-17f6-49fe-a5a3-5c391614ed39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.450869 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e91a34cb-17f6-49fe-a5a3-5c391614ed39-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-zgssx\" (UID: \"e91a34cb-17f6-49fe-a5a3-5c391614ed39\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.450914 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-tptrl\" (UID: \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.451027 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69l4r\" (UniqueName: \"kubernetes.io/projected/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-kube-api-access-69l4r\") pod \"marketplace-operator-547dbd544d-tptrl\" (UID: \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:12:20 crc kubenswrapper[5108]: E0104 00:12:20.452242 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:20.952223751 +0000 UTC m=+114.940788837 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.452734 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-bound-sa-token\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.453046 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-tptrl\" (UID: \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.558278 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:20 crc kubenswrapper[5108]: E0104 00:12:20.558361 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:21.058311525 +0000 UTC m=+115.046876611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.569129 5108 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-pppml container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.569212 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" podUID="4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.573686 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wtqq5\" (UniqueName: \"kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-kube-api-access-wtqq5\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.573884 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/bef2c9f7-de7a-4b8b-a712-36bb05ee31e0-tmpfs\") pod \"catalog-operator-75ff9f647d-8vtr8\" (UID: \"bef2c9f7-de7a-4b8b-a712-36bb05ee31e0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.574259 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vczfm\" (UniqueName: \"kubernetes.io/projected/6728c02b-1d01-45db-96f0-69f1f699fcf0-kube-api-access-vczfm\") pod \"machine-config-server-2gzj6\" (UID: \"6728c02b-1d01-45db-96f0-69f1f699fcf0\") " pod="openshift-machine-config-operator/machine-config-server-2gzj6" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.574292 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/df9ddf01-bee2-4ba3-bba8-a6038b624504-tmpfs\") pod \"packageserver-7d4fc7d867-5mch2\" (UID: \"df9ddf01-bee2-4ba3-bba8-a6038b624504\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.574318 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/12382f58-cdec-4d79-abf7-f9281092d8f0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-pn9xb\" (UID: \"12382f58-cdec-4d79-abf7-f9281092d8f0\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-pn9xb" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.574364 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: 
\"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.574392 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/841a53bb-0876-4f9d-b4bf-b01da8e9307b-socket-dir\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.574441 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa56c23c-aae4-4b37-a657-9622fa143fa6-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-nk4f2\" (UID: \"aa56c23c-aae4-4b37-a657-9622fa143fa6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.574469 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/40ae343c-e956-4351-bcd6-311eeef3976c-metrics-tls\") pod \"dns-default-fsqx9\" (UID: \"40ae343c-e956-4351-bcd6-311eeef3976c\") " pod="openshift-dns/dns-default-fsqx9" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.574586 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e91a34cb-17f6-49fe-a5a3-5c391614ed39-config\") pod \"openshift-controller-manager-operator-686468bdd5-zgssx\" (UID: \"e91a34cb-17f6-49fe-a5a3-5c391614ed39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.574632 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-fm5vg\" (UniqueName: \"kubernetes.io/projected/2afd2e0a-36e5-4af7-a427-0893b7521e9d-kube-api-access-fm5vg\") pod \"service-ca-operator-5b9c976747-978t5\" (UID: \"2afd2e0a-36e5-4af7-a427-0893b7521e9d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.574715 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/841a53bb-0876-4f9d-b4bf-b01da8e9307b-registration-dir\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.574737 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7c39a999-644f-43cd-b7e6-c7fd14281924-ca-trust-extracted\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.574752 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df9ddf01-bee2-4ba3-bba8-a6038b624504-webhook-cert\") pod \"packageserver-7d4fc7d867-5mch2\" (UID: \"df9ddf01-bee2-4ba3-bba8-a6038b624504\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.574780 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-tmp\") pod \"marketplace-operator-547dbd544d-tptrl\" (UID: \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:12:20 crc 
kubenswrapper[5108]: I0104 00:12:20.574851 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bef2c9f7-de7a-4b8b-a712-36bb05ee31e0-srv-cert\") pod \"catalog-operator-75ff9f647d-8vtr8\" (UID: \"bef2c9f7-de7a-4b8b-a712-36bb05ee31e0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.574895 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6728c02b-1d01-45db-96f0-69f1f699fcf0-certs\") pod \"machine-config-server-2gzj6\" (UID: \"6728c02b-1d01-45db-96f0-69f1f699fcf0\") " pod="openshift-machine-config-operator/machine-config-server-2gzj6" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.575030 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/14a3d6fe-b87f-473d-b105-d2cf34343253-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hvq52\" (UID: \"14a3d6fe-b87f-473d-b105-d2cf34343253\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.575045 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2afd2e0a-36e5-4af7-a427-0893b7521e9d-serving-cert\") pod \"service-ca-operator-5b9c976747-978t5\" (UID: \"2afd2e0a-36e5-4af7-a427-0893b7521e9d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.575089 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hscww\" (UniqueName: \"kubernetes.io/projected/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-kube-api-access-hscww\") pod 
\"collect-profiles-29458080-xfr7k\" (UID: \"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.575146 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7c39a999-644f-43cd-b7e6-c7fd14281924-installation-pull-secrets\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.575422 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8de3007f-731e-4a3e-84ba-c6a1fcbb8641-images\") pod \"machine-config-operator-67c9d58cbb-42gmr\" (UID: \"8de3007f-731e-4a3e-84ba-c6a1fcbb8641\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.575511 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7c39a999-644f-43cd-b7e6-c7fd14281924-registry-certificates\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.575574 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6728c02b-1d01-45db-96f0-69f1f699fcf0-node-bootstrap-token\") pod \"machine-config-server-2gzj6\" (UID: \"6728c02b-1d01-45db-96f0-69f1f699fcf0\") " pod="openshift-machine-config-operator/machine-config-server-2gzj6" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.575611 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxqcl\" (UniqueName: \"kubernetes.io/projected/df9ddf01-bee2-4ba3-bba8-a6038b624504-kube-api-access-xxqcl\") pod \"packageserver-7d4fc7d867-5mch2\" (UID: \"df9ddf01-bee2-4ba3-bba8-a6038b624504\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.575636 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7dde2d02-01f5-44da-87e7-72ba520acaa5-signing-key\") pod \"service-ca-74545575db-2vn7s\" (UID: \"7dde2d02-01f5-44da-87e7-72ba520acaa5\") " pod="openshift-service-ca/service-ca-74545575db-2vn7s" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.575677 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-registry-tls\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.575691 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/841a53bb-0876-4f9d-b4bf-b01da8e9307b-plugins-dir\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.575787 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8de3007f-731e-4a3e-84ba-c6a1fcbb8641-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-42gmr\" (UID: \"8de3007f-731e-4a3e-84ba-c6a1fcbb8641\") " 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.575804 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvcbv\" (UniqueName: \"kubernetes.io/projected/8de3007f-731e-4a3e-84ba-c6a1fcbb8641-kube-api-access-tvcbv\") pod \"machine-config-operator-67c9d58cbb-42gmr\" (UID: \"8de3007f-731e-4a3e-84ba-c6a1fcbb8641\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.576162 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v2nz\" (UniqueName: \"kubernetes.io/projected/841a53bb-0876-4f9d-b4bf-b01da8e9307b-kube-api-access-5v2nz\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.576246 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/841a53bb-0876-4f9d-b4bf-b01da8e9307b-csi-data-dir\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.576301 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df9ddf01-bee2-4ba3-bba8-a6038b624504-apiservice-cert\") pod \"packageserver-7d4fc7d867-5mch2\" (UID: \"df9ddf01-bee2-4ba3-bba8-a6038b624504\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.576361 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rmc9c\" (UniqueName: \"kubernetes.io/projected/bc232cbd-783b-4787-bfd6-d814e7b2cd4f-kube-api-access-rmc9c\") pod \"ingress-canary-gdrn8\" (UID: \"bc232cbd-783b-4787-bfd6-d814e7b2cd4f\") " pod="openshift-ingress-canary/ingress-canary-gdrn8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.576400 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11fdfb17-4544-4c6a-b985-22de45dfaf04-serving-cert\") pod \"kube-apiserver-operator-575994946d-v9rxg\" (UID: \"11fdfb17-4544-4c6a-b985-22de45dfaf04\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.576479 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/841a53bb-0876-4f9d-b4bf-b01da8e9307b-mountpoint-dir\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.576501 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7dde2d02-01f5-44da-87e7-72ba520acaa5-signing-cabundle\") pod \"service-ca-74545575db-2vn7s\" (UID: \"7dde2d02-01f5-44da-87e7-72ba520acaa5\") " pod="openshift-service-ca/service-ca-74545575db-2vn7s" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.576990 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcdk4\" (UniqueName: \"kubernetes.io/projected/7dde2d02-01f5-44da-87e7-72ba520acaa5-kube-api-access-xcdk4\") pod \"service-ca-74545575db-2vn7s\" (UID: \"7dde2d02-01f5-44da-87e7-72ba520acaa5\") " pod="openshift-service-ca/service-ca-74545575db-2vn7s" Jan 04 00:12:20 crc 
kubenswrapper[5108]: I0104 00:12:20.577045 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/14a3d6fe-b87f-473d-b105-d2cf34343253-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hvq52\" (UID: \"14a3d6fe-b87f-473d-b105-d2cf34343253\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.577126 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r727\" (UniqueName: \"kubernetes.io/projected/12382f58-cdec-4d79-abf7-f9281092d8f0-kube-api-access-8r727\") pod \"control-plane-machine-set-operator-75ffdb6fcd-pn9xb\" (UID: \"12382f58-cdec-4d79-abf7-f9281092d8f0\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-pn9xb" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.577264 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cflrc\" (UniqueName: \"kubernetes.io/projected/103b9ed4-5d88-445c-9c56-e7144fcbb923-kube-api-access-cflrc\") pod \"multus-admission-controller-69db94689b-rsjsp\" (UID: \"103b9ed4-5d88-445c-9c56-e7144fcbb923\") " pod="openshift-multus/multus-admission-controller-69db94689b-rsjsp" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.577289 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/14a3d6fe-b87f-473d-b105-d2cf34343253-ready\") pod \"cni-sysctl-allowlist-ds-hvq52\" (UID: \"14a3d6fe-b87f-473d-b105-d2cf34343253\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.577307 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-secret-volume\") pod \"collect-profiles-29458080-xfr7k\" (UID: \"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.577369 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mwwtd\" (UniqueName: \"kubernetes.io/projected/e91a34cb-17f6-49fe-a5a3-5c391614ed39-kube-api-access-mwwtd\") pod \"openshift-controller-manager-operator-686468bdd5-zgssx\" (UID: \"e91a34cb-17f6-49fe-a5a3-5c391614ed39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.577387 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmpxn\" (UniqueName: \"kubernetes.io/projected/aa56c23c-aae4-4b37-a657-9622fa143fa6-kube-api-access-bmpxn\") pod \"package-server-manager-77f986bd66-nk4f2\" (UID: \"aa56c23c-aae4-4b37-a657-9622fa143fa6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.577408 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c39a999-644f-43cd-b7e6-c7fd14281924-trusted-ca\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.577426 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40ae343c-e956-4351-bcd6-311eeef3976c-config-volume\") pod \"dns-default-fsqx9\" (UID: \"40ae343c-e956-4351-bcd6-311eeef3976c\") " pod="openshift-dns/dns-default-fsqx9" Jan 04 
00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.577485 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/103b9ed4-5d88-445c-9c56-e7144fcbb923-webhook-certs\") pod \"multus-admission-controller-69db94689b-rsjsp\" (UID: \"103b9ed4-5d88-445c-9c56-e7144fcbb923\") " pod="openshift-multus/multus-admission-controller-69db94689b-rsjsp" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.577537 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e91a34cb-17f6-49fe-a5a3-5c391614ed39-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-zgssx\" (UID: \"e91a34cb-17f6-49fe-a5a3-5c391614ed39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.577553 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e91a34cb-17f6-49fe-a5a3-5c391614ed39-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-zgssx\" (UID: \"e91a34cb-17f6-49fe-a5a3-5c391614ed39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.577628 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj42n\" (UniqueName: \"kubernetes.io/projected/14a3d6fe-b87f-473d-b105-d2cf34343253-kube-api-access-jj42n\") pod \"cni-sysctl-allowlist-ds-hvq52\" (UID: \"14a3d6fe-b87f-473d-b105-d2cf34343253\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.577656 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/bef2c9f7-de7a-4b8b-a712-36bb05ee31e0-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-8vtr8\" (UID: \"bef2c9f7-de7a-4b8b-a712-36bb05ee31e0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.577670 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8t8m\" (UniqueName: \"kubernetes.io/projected/40ae343c-e956-4351-bcd6-311eeef3976c-kube-api-access-z8t8m\") pod \"dns-default-fsqx9\" (UID: \"40ae343c-e956-4351-bcd6-311eeef3976c\") " pod="openshift-dns/dns-default-fsqx9" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.577774 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-tptrl\" (UID: \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.577859 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11fdfb17-4544-4c6a-b985-22de45dfaf04-config\") pod \"kube-apiserver-operator-575994946d-v9rxg\" (UID: \"11fdfb17-4544-4c6a-b985-22de45dfaf04\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.597260 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7c39a999-644f-43cd-b7e6-c7fd14281924-registry-certificates\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 
00:12:20.598112 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-tmp\") pod \"marketplace-operator-547dbd544d-tptrl\" (UID: \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:12:20 crc kubenswrapper[5108]: E0104 00:12:20.598781 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:21.098756295 +0000 UTC m=+115.087321381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.611669 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11fdfb17-4544-4c6a-b985-22de45dfaf04-kube-api-access\") pod \"kube-apiserver-operator-575994946d-v9rxg\" (UID: \"11fdfb17-4544-4c6a-b985-22de45dfaf04\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.599502 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e91a34cb-17f6-49fe-a5a3-5c391614ed39-config\") pod \"openshift-controller-manager-operator-686468bdd5-zgssx\" (UID: \"e91a34cb-17f6-49fe-a5a3-5c391614ed39\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.606513 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e91a34cb-17f6-49fe-a5a3-5c391614ed39-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-zgssx\" (UID: \"e91a34cb-17f6-49fe-a5a3-5c391614ed39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.608618 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-tptrl\" (UID: \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.612260 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-69l4r\" (UniqueName: \"kubernetes.io/projected/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-kube-api-access-69l4r\") pod \"marketplace-operator-547dbd544d-tptrl\" (UID: \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.612336 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/40ae343c-e956-4351-bcd6-311eeef3976c-tmp-dir\") pod \"dns-default-fsqx9\" (UID: \"40ae343c-e956-4351-bcd6-311eeef3976c\") " pod="openshift-dns/dns-default-fsqx9" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.612383 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2afd2e0a-36e5-4af7-a427-0893b7521e9d-config\") pod \"service-ca-operator-5b9c976747-978t5\" (UID: \"2afd2e0a-36e5-4af7-a427-0893b7521e9d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.612509 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8de3007f-731e-4a3e-84ba-c6a1fcbb8641-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-42gmr\" (UID: \"8de3007f-731e-4a3e-84ba-c6a1fcbb8641\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.612637 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-bound-sa-token\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.612705 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxxqk\" (UniqueName: \"kubernetes.io/projected/bef2c9f7-de7a-4b8b-a712-36bb05ee31e0-kube-api-access-pxxqk\") pod \"catalog-operator-75ff9f647d-8vtr8\" (UID: \"bef2c9f7-de7a-4b8b-a712-36bb05ee31e0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.612777 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-config-volume\") pod \"collect-profiles-29458080-xfr7k\" (UID: \"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.612827 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-tptrl\" (UID: \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.612861 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bc232cbd-783b-4787-bfd6-d814e7b2cd4f-cert\") pod \"ingress-canary-gdrn8\" (UID: \"bc232cbd-783b-4787-bfd6-d814e7b2cd4f\") " pod="openshift-ingress-canary/ingress-canary-gdrn8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.612899 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/11fdfb17-4544-4c6a-b985-22de45dfaf04-tmp-dir\") pod \"kube-apiserver-operator-575994946d-v9rxg\" (UID: \"11fdfb17-4544-4c6a-b985-22de45dfaf04\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.603380 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7c39a999-644f-43cd-b7e6-c7fd14281924-ca-trust-extracted\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.626300 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e91a34cb-17f6-49fe-a5a3-5c391614ed39-serving-cert\") 
pod \"openshift-controller-manager-operator-686468bdd5-zgssx\" (UID: \"e91a34cb-17f6-49fe-a5a3-5c391614ed39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.626761 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-tptrl\" (UID: \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.633638 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s5hd7" event={"ID":"a07ebe6a-ff42-4584-8503-9afefb4bcee1","Type":"ContainerStarted","Data":"b880174e2f4d4fc7529455ae8185674a6837f4379e8d44aa8774659292a735f9"} Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.633703 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" event={"ID":"0ed21f10-7015-400b-bd89-9b5ba497be04","Type":"ContainerStarted","Data":"e3cbca7b7073d07773ddebb451843f317eaed2d3c6976b7e16cf90380d2c3c84"} Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.633744 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" event={"ID":"aefe6a9a-7107-42ce-8a8c-dddb8b52fded","Type":"ContainerStarted","Data":"8717e2729bd35921b8657f1489358b1098664a1200e2c0a19e124aa535f33713"} Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.633760 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" 
event={"ID":"bc782574-9478-4d61-a46b-b592c4b8a20d","Type":"ContainerStarted","Data":"4ee616d73a8f869feaea95ecc8e54d483cfb3c51d10e7f9d5abe3becb0edc26d"} Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.633772 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29458080-vx5nr" event={"ID":"52146c21-3246-4f94-b1ac-d912a24401ab","Type":"ContainerStarted","Data":"eb2d938b22970aca7c792c0c2ea37c98ffc21de5bbfcf9fcba0ce3f60d03a92f"} Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.633786 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q" event={"ID":"684e8e97-32b5-46c7-b3e0-0d89c55d7214","Type":"ContainerStarted","Data":"37691ae74a05b6ec8f71183e8a27365192fb16519da728d0da30262b44c912de"} Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.636770 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c39a999-644f-43cd-b7e6-c7fd14281924-trusted-ca\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.645883 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtqq5\" (UniqueName: \"kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-kube-api-access-wtqq5\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.648598 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwwtd\" (UniqueName: \"kubernetes.io/projected/e91a34cb-17f6-49fe-a5a3-5c391614ed39-kube-api-access-mwwtd\") pod \"openshift-controller-manager-operator-686468bdd5-zgssx\" (UID: 
\"e91a34cb-17f6-49fe-a5a3-5c391614ed39\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.650344 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-registry-tls\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.679172 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7c39a999-644f-43cd-b7e6-c7fd14281924-installation-pull-secrets\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.683866 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.685068 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.685122 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.699676 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-69l4r\" (UniqueName: \"kubernetes.io/projected/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-kube-api-access-69l4r\") pod \"marketplace-operator-547dbd544d-tptrl\" (UID: \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.705464 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-bound-sa-token\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.715215 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:20 crc kubenswrapper[5108]: E0104 00:12:20.715655 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:21.215631903 +0000 UTC m=+115.204196989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.715749 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11fdfb17-4544-4c6a-b985-22de45dfaf04-config\") pod \"kube-apiserver-operator-575994946d-v9rxg\" (UID: \"11fdfb17-4544-4c6a-b985-22de45dfaf04\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.715778 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11fdfb17-4544-4c6a-b985-22de45dfaf04-kube-api-access\") pod \"kube-apiserver-operator-575994946d-v9rxg\" (UID: \"11fdfb17-4544-4c6a-b985-22de45dfaf04\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.715816 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/40ae343c-e956-4351-bcd6-311eeef3976c-tmp-dir\") pod \"dns-default-fsqx9\" (UID: \"40ae343c-e956-4351-bcd6-311eeef3976c\") " pod="openshift-dns/dns-default-fsqx9" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.715841 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2afd2e0a-36e5-4af7-a427-0893b7521e9d-config\") pod \"service-ca-operator-5b9c976747-978t5\" (UID: \"2afd2e0a-36e5-4af7-a427-0893b7521e9d\") 
" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.715927 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8de3007f-731e-4a3e-84ba-c6a1fcbb8641-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-42gmr\" (UID: \"8de3007f-731e-4a3e-84ba-c6a1fcbb8641\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716021 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pxxqk\" (UniqueName: \"kubernetes.io/projected/bef2c9f7-de7a-4b8b-a712-36bb05ee31e0-kube-api-access-pxxqk\") pod \"catalog-operator-75ff9f647d-8vtr8\" (UID: \"bef2c9f7-de7a-4b8b-a712-36bb05ee31e0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716056 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-config-volume\") pod \"collect-profiles-29458080-xfr7k\" (UID: \"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716091 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bc232cbd-783b-4787-bfd6-d814e7b2cd4f-cert\") pod \"ingress-canary-gdrn8\" (UID: \"bc232cbd-783b-4787-bfd6-d814e7b2cd4f\") " pod="openshift-ingress-canary/ingress-canary-gdrn8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716115 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/11fdfb17-4544-4c6a-b985-22de45dfaf04-tmp-dir\") pod 
\"kube-apiserver-operator-575994946d-v9rxg\" (UID: \"11fdfb17-4544-4c6a-b985-22de45dfaf04\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716149 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bef2c9f7-de7a-4b8b-a712-36bb05ee31e0-tmpfs\") pod \"catalog-operator-75ff9f647d-8vtr8\" (UID: \"bef2c9f7-de7a-4b8b-a712-36bb05ee31e0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716185 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vczfm\" (UniqueName: \"kubernetes.io/projected/6728c02b-1d01-45db-96f0-69f1f699fcf0-kube-api-access-vczfm\") pod \"machine-config-server-2gzj6\" (UID: \"6728c02b-1d01-45db-96f0-69f1f699fcf0\") " pod="openshift-machine-config-operator/machine-config-server-2gzj6" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716222 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/df9ddf01-bee2-4ba3-bba8-a6038b624504-tmpfs\") pod \"packageserver-7d4fc7d867-5mch2\" (UID: \"df9ddf01-bee2-4ba3-bba8-a6038b624504\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716270 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/12382f58-cdec-4d79-abf7-f9281092d8f0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-pn9xb\" (UID: \"12382f58-cdec-4d79-abf7-f9281092d8f0\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-pn9xb" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716307 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716338 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/841a53bb-0876-4f9d-b4bf-b01da8e9307b-socket-dir\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716402 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa56c23c-aae4-4b37-a657-9622fa143fa6-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-nk4f2\" (UID: \"aa56c23c-aae4-4b37-a657-9622fa143fa6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716419 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/40ae343c-e956-4351-bcd6-311eeef3976c-metrics-tls\") pod \"dns-default-fsqx9\" (UID: \"40ae343c-e956-4351-bcd6-311eeef3976c\") " pod="openshift-dns/dns-default-fsqx9" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716441 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fm5vg\" (UniqueName: \"kubernetes.io/projected/2afd2e0a-36e5-4af7-a427-0893b7521e9d-kube-api-access-fm5vg\") pod \"service-ca-operator-5b9c976747-978t5\" (UID: \"2afd2e0a-36e5-4af7-a427-0893b7521e9d\") " 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716477 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/841a53bb-0876-4f9d-b4bf-b01da8e9307b-registration-dir\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716496 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df9ddf01-bee2-4ba3-bba8-a6038b624504-webhook-cert\") pod \"packageserver-7d4fc7d867-5mch2\" (UID: \"df9ddf01-bee2-4ba3-bba8-a6038b624504\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716521 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bef2c9f7-de7a-4b8b-a712-36bb05ee31e0-srv-cert\") pod \"catalog-operator-75ff9f647d-8vtr8\" (UID: \"bef2c9f7-de7a-4b8b-a712-36bb05ee31e0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716538 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6728c02b-1d01-45db-96f0-69f1f699fcf0-certs\") pod \"machine-config-server-2gzj6\" (UID: \"6728c02b-1d01-45db-96f0-69f1f699fcf0\") " pod="openshift-machine-config-operator/machine-config-server-2gzj6" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716595 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/14a3d6fe-b87f-473d-b105-d2cf34343253-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hvq52\" (UID: 
\"14a3d6fe-b87f-473d-b105-d2cf34343253\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716617 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2afd2e0a-36e5-4af7-a427-0893b7521e9d-serving-cert\") pod \"service-ca-operator-5b9c976747-978t5\" (UID: \"2afd2e0a-36e5-4af7-a427-0893b7521e9d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.716639 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hscww\" (UniqueName: \"kubernetes.io/projected/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-kube-api-access-hscww\") pod \"collect-profiles-29458080-xfr7k\" (UID: \"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.723175 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8de3007f-731e-4a3e-84ba-c6a1fcbb8641-images\") pod \"machine-config-operator-67c9d58cbb-42gmr\" (UID: \"8de3007f-731e-4a3e-84ba-c6a1fcbb8641\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.723889 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8de3007f-731e-4a3e-84ba-c6a1fcbb8641-images\") pod \"machine-config-operator-67c9d58cbb-42gmr\" (UID: \"8de3007f-731e-4a3e-84ba-c6a1fcbb8641\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.723923 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/11fdfb17-4544-4c6a-b985-22de45dfaf04-config\") pod \"kube-apiserver-operator-575994946d-v9rxg\" (UID: \"11fdfb17-4544-4c6a-b985-22de45dfaf04\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.724007 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6728c02b-1d01-45db-96f0-69f1f699fcf0-node-bootstrap-token\") pod \"machine-config-server-2gzj6\" (UID: \"6728c02b-1d01-45db-96f0-69f1f699fcf0\") " pod="openshift-machine-config-operator/machine-config-server-2gzj6" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.724074 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xxqcl\" (UniqueName: \"kubernetes.io/projected/df9ddf01-bee2-4ba3-bba8-a6038b624504-kube-api-access-xxqcl\") pod \"packageserver-7d4fc7d867-5mch2\" (UID: \"df9ddf01-bee2-4ba3-bba8-a6038b624504\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.724129 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7dde2d02-01f5-44da-87e7-72ba520acaa5-signing-key\") pod \"service-ca-74545575db-2vn7s\" (UID: \"7dde2d02-01f5-44da-87e7-72ba520acaa5\") " pod="openshift-service-ca/service-ca-74545575db-2vn7s" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.724173 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/841a53bb-0876-4f9d-b4bf-b01da8e9307b-plugins-dir\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.724217 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8de3007f-731e-4a3e-84ba-c6a1fcbb8641-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-42gmr\" (UID: \"8de3007f-731e-4a3e-84ba-c6a1fcbb8641\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.724546 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/841a53bb-0876-4f9d-b4bf-b01da8e9307b-socket-dir\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.725540 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/40ae343c-e956-4351-bcd6-311eeef3976c-tmp-dir\") pod \"dns-default-fsqx9\" (UID: \"40ae343c-e956-4351-bcd6-311eeef3976c\") " pod="openshift-dns/dns-default-fsqx9" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.726606 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tvcbv\" (UniqueName: \"kubernetes.io/projected/8de3007f-731e-4a3e-84ba-c6a1fcbb8641-kube-api-access-tvcbv\") pod \"machine-config-operator-67c9d58cbb-42gmr\" (UID: \"8de3007f-731e-4a3e-84ba-c6a1fcbb8641\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.731349 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5v2nz\" (UniqueName: \"kubernetes.io/projected/841a53bb-0876-4f9d-b4bf-b01da8e9307b-kube-api-access-5v2nz\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.731432 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/841a53bb-0876-4f9d-b4bf-b01da8e9307b-csi-data-dir\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.731466 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df9ddf01-bee2-4ba3-bba8-a6038b624504-apiservice-cert\") pod \"packageserver-7d4fc7d867-5mch2\" (UID: \"df9ddf01-bee2-4ba3-bba8-a6038b624504\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.731534 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rmc9c\" (UniqueName: \"kubernetes.io/projected/bc232cbd-783b-4787-bfd6-d814e7b2cd4f-kube-api-access-rmc9c\") pod \"ingress-canary-gdrn8\" (UID: \"bc232cbd-783b-4787-bfd6-d814e7b2cd4f\") " pod="openshift-ingress-canary/ingress-canary-gdrn8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.731661 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11fdfb17-4544-4c6a-b985-22de45dfaf04-serving-cert\") pod \"kube-apiserver-operator-575994946d-v9rxg\" (UID: \"11fdfb17-4544-4c6a-b985-22de45dfaf04\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.731736 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/841a53bb-0876-4f9d-b4bf-b01da8e9307b-mountpoint-dir\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc 
kubenswrapper[5108]: I0104 00:12:20.731803 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7dde2d02-01f5-44da-87e7-72ba520acaa5-signing-cabundle\") pod \"service-ca-74545575db-2vn7s\" (UID: \"7dde2d02-01f5-44da-87e7-72ba520acaa5\") " pod="openshift-service-ca/service-ca-74545575db-2vn7s" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.731857 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xcdk4\" (UniqueName: \"kubernetes.io/projected/7dde2d02-01f5-44da-87e7-72ba520acaa5-kube-api-access-xcdk4\") pod \"service-ca-74545575db-2vn7s\" (UID: \"7dde2d02-01f5-44da-87e7-72ba520acaa5\") " pod="openshift-service-ca/service-ca-74545575db-2vn7s" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.731897 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/14a3d6fe-b87f-473d-b105-d2cf34343253-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hvq52\" (UID: \"14a3d6fe-b87f-473d-b105-d2cf34343253\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.732050 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8r727\" (UniqueName: \"kubernetes.io/projected/12382f58-cdec-4d79-abf7-f9281092d8f0-kube-api-access-8r727\") pod \"control-plane-machine-set-operator-75ffdb6fcd-pn9xb\" (UID: \"12382f58-cdec-4d79-abf7-f9281092d8f0\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-pn9xb" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.732152 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cflrc\" (UniqueName: \"kubernetes.io/projected/103b9ed4-5d88-445c-9c56-e7144fcbb923-kube-api-access-cflrc\") pod \"multus-admission-controller-69db94689b-rsjsp\" (UID: 
\"103b9ed4-5d88-445c-9c56-e7144fcbb923\") " pod="openshift-multus/multus-admission-controller-69db94689b-rsjsp" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.732301 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/14a3d6fe-b87f-473d-b105-d2cf34343253-ready\") pod \"cni-sysctl-allowlist-ds-hvq52\" (UID: \"14a3d6fe-b87f-473d-b105-d2cf34343253\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.732394 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-secret-volume\") pod \"collect-profiles-29458080-xfr7k\" (UID: \"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.734702 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2afd2e0a-36e5-4af7-a427-0893b7521e9d-config\") pod \"service-ca-operator-5b9c976747-978t5\" (UID: \"2afd2e0a-36e5-4af7-a427-0893b7521e9d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.734762 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bmpxn\" (UniqueName: \"kubernetes.io/projected/aa56c23c-aae4-4b37-a657-9622fa143fa6-kube-api-access-bmpxn\") pod \"package-server-manager-77f986bd66-nk4f2\" (UID: \"aa56c23c-aae4-4b37-a657-9622fa143fa6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.734885 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/40ae343c-e956-4351-bcd6-311eeef3976c-config-volume\") pod \"dns-default-fsqx9\" (UID: \"40ae343c-e956-4351-bcd6-311eeef3976c\") " pod="openshift-dns/dns-default-fsqx9" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.734942 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/103b9ed4-5d88-445c-9c56-e7144fcbb923-webhook-certs\") pod \"multus-admission-controller-69db94689b-rsjsp\" (UID: \"103b9ed4-5d88-445c-9c56-e7144fcbb923\") " pod="openshift-multus/multus-admission-controller-69db94689b-rsjsp" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.735037 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jj42n\" (UniqueName: \"kubernetes.io/projected/14a3d6fe-b87f-473d-b105-d2cf34343253-kube-api-access-jj42n\") pod \"cni-sysctl-allowlist-ds-hvq52\" (UID: \"14a3d6fe-b87f-473d-b105-d2cf34343253\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.735066 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bef2c9f7-de7a-4b8b-a712-36bb05ee31e0-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-8vtr8\" (UID: \"bef2c9f7-de7a-4b8b-a712-36bb05ee31e0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.735111 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z8t8m\" (UniqueName: \"kubernetes.io/projected/40ae343c-e956-4351-bcd6-311eeef3976c-kube-api-access-z8t8m\") pod \"dns-default-fsqx9\" (UID: \"40ae343c-e956-4351-bcd6-311eeef3976c\") " pod="openshift-dns/dns-default-fsqx9" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.736749 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/40ae343c-e956-4351-bcd6-311eeef3976c-config-volume\") pod \"dns-default-fsqx9\" (UID: \"40ae343c-e956-4351-bcd6-311eeef3976c\") " pod="openshift-dns/dns-default-fsqx9" Jan 04 00:12:20 crc kubenswrapper[5108]: E0104 00:12:20.737387 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:21.237296331 +0000 UTC m=+115.225861417 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.758913 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/841a53bb-0876-4f9d-b4bf-b01da8e9307b-csi-data-dir\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.761885 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8de3007f-731e-4a3e-84ba-c6a1fcbb8641-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-42gmr\" (UID: \"8de3007f-731e-4a3e-84ba-c6a1fcbb8641\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.762533 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-config-volume\") pod \"collect-profiles-29458080-xfr7k\" (UID: \"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.767723 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/11fdfb17-4544-4c6a-b985-22de45dfaf04-tmp-dir\") pod \"kube-apiserver-operator-575994946d-v9rxg\" (UID: \"11fdfb17-4544-4c6a-b985-22de45dfaf04\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.767824 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/841a53bb-0876-4f9d-b4bf-b01da8e9307b-registration-dir\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.768186 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bef2c9f7-de7a-4b8b-a712-36bb05ee31e0-tmpfs\") pod \"catalog-operator-75ff9f647d-8vtr8\" (UID: \"bef2c9f7-de7a-4b8b-a712-36bb05ee31e0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.777152 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/df9ddf01-bee2-4ba3-bba8-a6038b624504-tmpfs\") pod \"packageserver-7d4fc7d867-5mch2\" (UID: \"df9ddf01-bee2-4ba3-bba8-a6038b624504\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.778234 5108 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa56c23c-aae4-4b37-a657-9622fa143fa6-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-nk4f2\" (UID: \"aa56c23c-aae4-4b37-a657-9622fa143fa6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.779389 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/103b9ed4-5d88-445c-9c56-e7144fcbb923-webhook-certs\") pod \"multus-admission-controller-69db94689b-rsjsp\" (UID: \"103b9ed4-5d88-445c-9c56-e7144fcbb923\") " pod="openshift-multus/multus-admission-controller-69db94689b-rsjsp" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.781667 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/841a53bb-0876-4f9d-b4bf-b01da8e9307b-mountpoint-dir\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.786727 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/14a3d6fe-b87f-473d-b105-d2cf34343253-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hvq52\" (UID: \"14a3d6fe-b87f-473d-b105-d2cf34343253\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.790485 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/14a3d6fe-b87f-473d-b105-d2cf34343253-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hvq52\" (UID: \"14a3d6fe-b87f-473d-b105-d2cf34343253\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.795449 
5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7dde2d02-01f5-44da-87e7-72ba520acaa5-signing-cabundle\") pod \"service-ca-74545575db-2vn7s\" (UID: \"7dde2d02-01f5-44da-87e7-72ba520acaa5\") " pod="openshift-service-ca/service-ca-74545575db-2vn7s" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.796852 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/40ae343c-e956-4351-bcd6-311eeef3976c-metrics-tls\") pod \"dns-default-fsqx9\" (UID: \"40ae343c-e956-4351-bcd6-311eeef3976c\") " pod="openshift-dns/dns-default-fsqx9" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.797419 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bc232cbd-783b-4787-bfd6-d814e7b2cd4f-cert\") pod \"ingress-canary-gdrn8\" (UID: \"bc232cbd-783b-4787-bfd6-d814e7b2cd4f\") " pod="openshift-ingress-canary/ingress-canary-gdrn8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.804095 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.804853 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/14a3d6fe-b87f-473d-b105-d2cf34343253-ready\") pod \"cni-sysctl-allowlist-ds-hvq52\" (UID: \"14a3d6fe-b87f-473d-b105-d2cf34343253\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.805354 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/841a53bb-0876-4f9d-b4bf-b01da8e9307b-plugins-dir\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.805440 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6728c02b-1d01-45db-96f0-69f1f699fcf0-certs\") pod \"machine-config-server-2gzj6\" (UID: \"6728c02b-1d01-45db-96f0-69f1f699fcf0\") " pod="openshift-machine-config-operator/machine-config-server-2gzj6" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.805790 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.810364 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-j22zl"] Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.811621 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bef2c9f7-de7a-4b8b-a712-36bb05ee31e0-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-8vtr8\" (UID: \"bef2c9f7-de7a-4b8b-a712-36bb05ee31e0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.811762 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df9ddf01-bee2-4ba3-bba8-a6038b624504-webhook-cert\") pod \"packageserver-7d4fc7d867-5mch2\" (UID: \"df9ddf01-bee2-4ba3-bba8-a6038b624504\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.813886 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df9ddf01-bee2-4ba3-bba8-a6038b624504-apiservice-cert\") pod \"packageserver-7d4fc7d867-5mch2\" (UID: \"df9ddf01-bee2-4ba3-bba8-a6038b624504\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.825182 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bef2c9f7-de7a-4b8b-a712-36bb05ee31e0-srv-cert\") pod \"catalog-operator-75ff9f647d-8vtr8\" (UID: \"bef2c9f7-de7a-4b8b-a712-36bb05ee31e0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.825231 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxxqk\" (UniqueName: \"kubernetes.io/projected/bef2c9f7-de7a-4b8b-a712-36bb05ee31e0-kube-api-access-pxxqk\") pod \"catalog-operator-75ff9f647d-8vtr8\" (UID: \"bef2c9f7-de7a-4b8b-a712-36bb05ee31e0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.825271 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8de3007f-731e-4a3e-84ba-c6a1fcbb8641-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-42gmr\" (UID: \"8de3007f-731e-4a3e-84ba-c6a1fcbb8641\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.825651 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2afd2e0a-36e5-4af7-a427-0893b7521e9d-serving-cert\") pod \"service-ca-operator-5b9c976747-978t5\" (UID: \"2afd2e0a-36e5-4af7-a427-0893b7521e9d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.829316 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6728c02b-1d01-45db-96f0-69f1f699fcf0-node-bootstrap-token\") pod \"machine-config-server-2gzj6\" (UID: \"6728c02b-1d01-45db-96f0-69f1f699fcf0\") " pod="openshift-machine-config-operator/machine-config-server-2gzj6" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.835670 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvcbv\" (UniqueName: \"kubernetes.io/projected/8de3007f-731e-4a3e-84ba-c6a1fcbb8641-kube-api-access-tvcbv\") pod \"machine-config-operator-67c9d58cbb-42gmr\" (UID: \"8de3007f-731e-4a3e-84ba-c6a1fcbb8641\") " 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.835942 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/12382f58-cdec-4d79-abf7-f9281092d8f0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-pn9xb\" (UID: \"12382f58-cdec-4d79-abf7-f9281092d8f0\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-pn9xb" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.836509 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-secret-volume\") pod \"collect-profiles-29458080-xfr7k\" (UID: \"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.836867 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:20 crc kubenswrapper[5108]: E0104 00:12:20.838355 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:21.337534117 +0000 UTC m=+115.326099203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.838743 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7dde2d02-01f5-44da-87e7-72ba520acaa5-signing-key\") pod \"service-ca-74545575db-2vn7s\" (UID: \"7dde2d02-01f5-44da-87e7-72ba520acaa5\") " pod="openshift-service-ca/service-ca-74545575db-2vn7s" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.853921 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v2nz\" (UniqueName: \"kubernetes.io/projected/841a53bb-0876-4f9d-b4bf-b01da8e9307b-kube-api-access-5v2nz\") pod \"csi-hostpathplugin-5jjj4\" (UID: \"841a53bb-0876-4f9d-b4bf-b01da8e9307b\") " pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.877089 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmpxn\" (UniqueName: \"kubernetes.io/projected/aa56c23c-aae4-4b37-a657-9622fa143fa6-kube-api-access-bmpxn\") pod \"package-server-manager-77f986bd66-nk4f2\" (UID: \"aa56c23c-aae4-4b37-a657-9622fa143fa6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.877955 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm5vg\" (UniqueName: \"kubernetes.io/projected/2afd2e0a-36e5-4af7-a427-0893b7521e9d-kube-api-access-fm5vg\") pod \"service-ca-operator-5b9c976747-978t5\" (UID: 
\"2afd2e0a-36e5-4af7-a427-0893b7521e9d\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.894328 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vczfm\" (UniqueName: \"kubernetes.io/projected/6728c02b-1d01-45db-96f0-69f1f699fcf0-kube-api-access-vczfm\") pod \"machine-config-server-2gzj6\" (UID: \"6728c02b-1d01-45db-96f0-69f1f699fcf0\") " pod="openshift-machine-config-operator/machine-config-server-2gzj6" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.914776 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj42n\" (UniqueName: \"kubernetes.io/projected/14a3d6fe-b87f-473d-b105-d2cf34343253-kube-api-access-jj42n\") pod \"cni-sysctl-allowlist-ds-hvq52\" (UID: \"14a3d6fe-b87f-473d-b105-d2cf34343253\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.943554 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmc9c\" (UniqueName: \"kubernetes.io/projected/bc232cbd-783b-4787-bfd6-d814e7b2cd4f-kube-api-access-rmc9c\") pod \"ingress-canary-gdrn8\" (UID: \"bc232cbd-783b-4787-bfd6-d814e7b2cd4f\") " pod="openshift-ingress-canary/ingress-canary-gdrn8" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.946884 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:20 crc kubenswrapper[5108]: E0104 00:12:20.947604 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:21.447578378 +0000 UTC m=+115.436143464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.957767 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8t8m\" (UniqueName: \"kubernetes.io/projected/40ae343c-e956-4351-bcd6-311eeef3976c-kube-api-access-z8t8m\") pod \"dns-default-fsqx9\" (UID: \"40ae343c-e956-4351-bcd6-311eeef3976c\") " pod="openshift-dns/dns-default-fsqx9" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.965283 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.989957 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxqcl\" (UniqueName: \"kubernetes.io/projected/df9ddf01-bee2-4ba3-bba8-a6038b624504-kube-api-access-xxqcl\") pod \"packageserver-7d4fc7d867-5mch2\" (UID: \"df9ddf01-bee2-4ba3-bba8-a6038b624504\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" Jan 04 00:12:20 crc kubenswrapper[5108]: I0104 00:12:20.994331 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.016286 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.048971 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:21 crc kubenswrapper[5108]: E0104 00:12:21.049218 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:21.549119109 +0000 UTC m=+115.537684195 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.050275 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:21 crc kubenswrapper[5108]: E0104 00:12:21.051116 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:21.550906207 +0000 UTC m=+115.539471293 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.063720 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-fsqx9" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.067211 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcdk4\" (UniqueName: \"kubernetes.io/projected/7dde2d02-01f5-44da-87e7-72ba520acaa5-kube-api-access-xcdk4\") pod \"service-ca-74545575db-2vn7s\" (UID: \"7dde2d02-01f5-44da-87e7-72ba520acaa5\") " pod="openshift-service-ca/service-ca-74545575db-2vn7s" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.086638 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.105860 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2gzj6" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.125787 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gdrn8" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.128350 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r727\" (UniqueName: \"kubernetes.io/projected/12382f58-cdec-4d79-abf7-f9281092d8f0-kube-api-access-8r727\") pod \"control-plane-machine-set-operator-75ffdb6fcd-pn9xb\" (UID: \"12382f58-cdec-4d79-abf7-f9281092d8f0\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-pn9xb" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.129406 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hscww\" (UniqueName: \"kubernetes.io/projected/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-kube-api-access-hscww\") pod \"collect-profiles-29458080-xfr7k\" (UID: \"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.141250 
5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.145518 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.146982 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cflrc\" (UniqueName: \"kubernetes.io/projected/103b9ed4-5d88-445c-9c56-e7144fcbb923-kube-api-access-cflrc\") pod \"multus-admission-controller-69db94689b-rsjsp\" (UID: \"103b9ed4-5d88-445c-9c56-e7144fcbb923\") " pod="openshift-multus/multus-admission-controller-69db94689b-rsjsp" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.151460 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:21 crc kubenswrapper[5108]: E0104 00:12:21.152471 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:21.652446028 +0000 UTC m=+115.641011124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.154377 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m"] Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.155445 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-fc5v8" podStartSLOduration=93.155420319 podStartE2EDuration="1m33.155420319s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:21.151765519 +0000 UTC m=+115.140330615" watchObservedRunningTime="2026-01-04 00:12:21.155420319 +0000 UTC m=+115.143985415" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.158853 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.163362 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11fdfb17-4544-4c6a-b985-22de45dfaf04-serving-cert\") pod \"kube-apiserver-operator-575994946d-v9rxg\" (UID: \"11fdfb17-4544-4c6a-b985-22de45dfaf04\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.168931 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248"] Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.172415 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw"] Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.172539 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-tvrx6"] Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.173811 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-shks7"] Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.175428 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7"] Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.177001 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11fdfb17-4544-4c6a-b985-22de45dfaf04-kube-api-access\") pod \"kube-apiserver-operator-575994946d-v9rxg\" (UID: \"11fdfb17-4544-4c6a-b985-22de45dfaf04\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.199949 5108 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-pn9xb" Jan 04 00:12:21 crc kubenswrapper[5108]: W0104 00:12:21.223694 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b38a4e7_457e_47c5_8fd6_2e67b92a3974.slice/crio-694d6e119909d9e4df123f35eeb55253e45a68c65f9fb948c71607868d3f1e58 WatchSource:0}: Error finding container 694d6e119909d9e4df123f35eeb55253e45a68c65f9fb948c71607868d3f1e58: Status 404 returned error can't find the container with id 694d6e119909d9e4df123f35eeb55253e45a68c65f9fb948c71607868d3f1e58 Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.228042 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-2vn7s" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.237473 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.247051 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.257182 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:21 crc kubenswrapper[5108]: E0104 00:12:21.257752 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-04 00:12:21.757735121 +0000 UTC m=+115.746300207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.365890 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:21 crc kubenswrapper[5108]: E0104 00:12:21.366078 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:21.866041355 +0000 UTC m=+115.854606441 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.366863 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:21 crc kubenswrapper[5108]: E0104 00:12:21.367441 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:21.867419892 +0000 UTC m=+115.855984978 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.407449 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-rsjsp" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.472349 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:21 crc kubenswrapper[5108]: E0104 00:12:21.473509 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:21.973475686 +0000 UTC m=+115.962040772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.577215 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:21 crc kubenswrapper[5108]: E0104 00:12:21.577737 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:22.077718409 +0000 UTC m=+116.066283495 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.714848 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.721105 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 04 00:12:21 crc kubenswrapper[5108]: E0104 00:12:21.723330 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:22.223298687 +0000 UTC m=+116.211863773 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.724943 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:21 crc kubenswrapper[5108]: E0104 00:12:21.725972 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:22.225951869 +0000 UTC m=+116.214516955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.732263 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.882692 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" event={"ID":"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802","Type":"ContainerStarted","Data":"a965487efcba908bef2a901a6b44e5e00cbf8c249a29fb33774744c6e913f512"} Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.903276 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:21 crc kubenswrapper[5108]: E0104 00:12:21.904148 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:22.404115823 +0000 UTC m=+116.392680909 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:21 crc kubenswrapper[5108]: I0104 00:12:21.926060 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fsqx9"] Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.008589 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:22 crc kubenswrapper[5108]: E0104 00:12:22.009234 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:22.5092177 +0000 UTC m=+116.497782786 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.037442 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7" event={"ID":"b4e8a0ac-421f-4300-8f7c-33e9128a0000","Type":"ContainerStarted","Data":"01164133be5a37b17662e161af182a2282ff84d44ae207b90fee3766117b93c6"} Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.061283 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw" event={"ID":"c76448af-1e86-4765-83a0-7d9cd39bd5a6","Type":"ContainerStarted","Data":"a0f1147f98d8e7102238740ddf94a5964127c33e73eee1ab2f97219029d57fb2"} Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.078739 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-tptrl"] Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.102074 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" event={"ID":"14a3d6fe-b87f-473d-b105-d2cf34343253","Type":"ContainerStarted","Data":"2102134d8e6db15d5ff404098fbd961aedc0b73f7f3b7fec97d5582cd3a49f84"} Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.113322 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:22 crc kubenswrapper[5108]: E0104 00:12:22.113622 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:22.613598778 +0000 UTC m=+116.602163864 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.113740 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-tvrx6" event={"ID":"6a476be9-e3a0-47e4-ab8f-29a4601a9134","Type":"ContainerStarted","Data":"bd926527861df4ecdd844afd11f8b7610686fce8e7b5dbd798fdb9d7ade0bc4a"} Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.124123 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-wl97g" event={"ID":"fe36c33b-eeaa-4b44-9ccd-d44131ccebce","Type":"ContainerStarted","Data":"620b06f4f6edfd339362e9f2ac9c70dbfd6ca3da8c50f00db389717442f1c4bd"} Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.141713 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.175255 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-747b44746d-glcdh" event={"ID":"68f75634-8fb1-40a4-801d-6355d62d81f8","Type":"ContainerStarted","Data":"471e7b3e606af77bf65997aaba1e33d4e43ad417f8829c68cb21fca3cbf800f3"} Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.212693 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-shks7" event={"ID":"149cc7c1-09e7-4088-8c9c-b42e4ea2b604","Type":"ContainerStarted","Data":"0b889b704b4f9b3334d96713bd5f3da90f347fd8f3139e01e713d3ae6dce6d15"} Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.216290 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:22 crc kubenswrapper[5108]: E0104 00:12:22.217279 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:22.717257496 +0000 UTC m=+116.705822572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.226938 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m" event={"ID":"3e38c1fa-0767-4ade-86be-f890237f9c94","Type":"ContainerStarted","Data":"89e440bdefdb774e5ee3df84c15f01b210ca0534f8761f2557356697fd7b7f39"} Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.230656 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" event={"ID":"5b38a4e7-457e-47c5-8fd6-2e67b92a3974","Type":"ContainerStarted","Data":"694d6e119909d9e4df123f35eeb55253e45a68c65f9fb948c71607868d3f1e58"} Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.251062 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k"] Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.261431 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-pzxxm" podStartSLOduration=94.261415827 podStartE2EDuration="1m34.261415827s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:22.259294439 +0000 UTC m=+116.247859535" watchObservedRunningTime="2026-01-04 00:12:22.261415827 +0000 UTC m=+116.249980913" Jan 04 00:12:22 crc 
kubenswrapper[5108]: I0104 00:12:22.289214 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-srgq4" podStartSLOduration=94.28915894 podStartE2EDuration="1m34.28915894s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:22.284169455 +0000 UTC m=+116.272734531" watchObservedRunningTime="2026-01-04 00:12:22.28915894 +0000 UTC m=+116.277724026" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.314063 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-jzcn5" podStartSLOduration=93.314040677 podStartE2EDuration="1m33.314040677s" podCreationTimestamp="2026-01-04 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:22.311989252 +0000 UTC m=+116.300554338" watchObservedRunningTime="2026-01-04 00:12:22.314040677 +0000 UTC m=+116.302605763" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.317374 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:22 crc kubenswrapper[5108]: E0104 00:12:22.319239 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:22.819191108 +0000 UTC m=+116.807756194 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.420567 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:22 crc kubenswrapper[5108]: E0104 00:12:22.420999 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:22.920979665 +0000 UTC m=+116.909544751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:22 crc kubenswrapper[5108]: W0104 00:12:22.423653 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6728c02b_1d01_45db_96f0_69f1f699fcf0.slice/crio-46b8dbb55389072ad0420a2736b05459ad7d346ea9bda1f0789e0a175d2b11ae WatchSource:0}: Error finding container 46b8dbb55389072ad0420a2736b05459ad7d346ea9bda1f0789e0a175d2b11ae: Status 404 returned error can't find the container with id 46b8dbb55389072ad0420a2736b05459ad7d346ea9bda1f0789e0a175d2b11ae Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.434419 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podStartSLOduration=94.434390449 podStartE2EDuration="1m34.434390449s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:22.38070388 +0000 UTC m=+116.369268966" watchObservedRunningTime="2026-01-04 00:12:22.434390449 +0000 UTC m=+116.422955535" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.508923 5108 ???:1] "http: TLS handshake error from 192.168.126.11:49434: no serving certificate available for the kubelet" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.523698 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:22 crc kubenswrapper[5108]: E0104 00:12:22.524407 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:23.024386096 +0000 UTC m=+117.012951172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.528118 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh" podStartSLOduration=93.528080236 podStartE2EDuration="1m33.528080236s" podCreationTimestamp="2026-01-04 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:22.510136288 +0000 UTC m=+116.498701364" watchObservedRunningTime="2026-01-04 00:12:22.528080236 +0000 UTC m=+116.516645322" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.528530 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6" podStartSLOduration=94.528525018 podStartE2EDuration="1m34.528525018s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:22.477876111 +0000 UTC m=+116.466441217" watchObservedRunningTime="2026-01-04 00:12:22.528525018 +0000 UTC m=+116.517090104" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.565129 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" podStartSLOduration=94.565104352 podStartE2EDuration="1m34.565104352s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:22.564303251 +0000 UTC m=+116.552868357" watchObservedRunningTime="2026-01-04 00:12:22.565104352 +0000 UTC m=+116.553669438" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.600848 5108 ???:1] "http: TLS handshake error from 192.168.126.11:49438: no serving certificate available for the kubelet" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.625885 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:22 crc kubenswrapper[5108]: E0104 00:12:22.626361 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:23.126340928 +0000 UTC m=+117.114906014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.699331 5108 ???:1] "http: TLS handshake error from 192.168.126.11:49448: no serving certificate available for the kubelet" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.726784 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:22 crc kubenswrapper[5108]: E0104 00:12:22.727236 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:23.22721106 +0000 UTC m=+117.215776146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.727400 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:22 crc kubenswrapper[5108]: E0104 00:12:22.727860 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:23.227853298 +0000 UTC m=+117.216418384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.733669 5108 ???:1] "http: TLS handshake error from 192.168.126.11:49462: no serving certificate available for the kubelet" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.786636 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.809097 5108 ???:1] "http: TLS handshake error from 192.168.126.11:49466: no serving certificate available for the kubelet" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.810918 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.810983 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.820464 5108 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-bxnjs container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": 
dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.820531 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" podUID="0ed21f10-7015-400b-bd89-9b5ba497be04" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.835723 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:22 crc kubenswrapper[5108]: E0104 00:12:22.836067 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:23.336005898 +0000 UTC m=+117.324570984 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.836461 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:22 crc kubenswrapper[5108]: E0104 00:12:22.837257 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:23.337234381 +0000 UTC m=+117.325799467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.877756 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" podStartSLOduration=94.877721922 podStartE2EDuration="1m34.877721922s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:22.87211249 +0000 UTC m=+116.860677576" watchObservedRunningTime="2026-01-04 00:12:22.877721922 +0000 UTC m=+116.866287028" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.916405 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.940007 5108 ???:1] "http: TLS handshake error from 192.168.126.11:49468: no serving certificate available for the kubelet" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.946734 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:22 crc kubenswrapper[5108]: E0104 00:12:22.948061 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:23.448034354 +0000 UTC m=+117.436599440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.951127 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s77qp" podStartSLOduration=94.951088007 podStartE2EDuration="1m34.951088007s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:22.944015094 +0000 UTC m=+116.932580190" watchObservedRunningTime="2026-01-04 00:12:22.951088007 +0000 UTC m=+116.939653093" Jan 04 00:12:22 crc kubenswrapper[5108]: I0104 00:12:22.979861 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6" Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.017919 5108 ???:1] "http: TLS handshake error from 192.168.126.11:49480: no serving certificate available for the kubelet" Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.020689 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29458080-vx5nr" podStartSLOduration=95.020661248 podStartE2EDuration="1m35.020661248s" 
podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:23.009972437 +0000 UTC m=+116.998537523" watchObservedRunningTime="2026-01-04 00:12:23.020661248 +0000 UTC m=+117.009226344" Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.024336 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-pn9xb"] Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.054047 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:23 crc kubenswrapper[5108]: E0104 00:12:23.056931 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:23.556902753 +0000 UTC m=+117.545468029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.093039 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" podStartSLOduration=95.093013964 podStartE2EDuration="1m35.093013964s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:23.056750739 +0000 UTC m=+117.045315845" watchObservedRunningTime="2026-01-04 00:12:23.093013964 +0000 UTC m=+117.081579050" Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.124294 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gdrn8"] Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.145714 5108 ???:1] "http: TLS handshake error from 192.168.126.11:49484: no serving certificate available for the kubelet" Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.149717 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" podStartSLOduration=94.149687926 podStartE2EDuration="1m34.149687926s" podCreationTimestamp="2026-01-04 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:23.128961512 +0000 UTC m=+117.117526608" watchObservedRunningTime="2026-01-04 00:12:23.149687926 +0000 UTC 
m=+117.138253012" Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.157256 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:23 crc kubenswrapper[5108]: E0104 00:12:23.157747 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:23.657722394 +0000 UTC m=+117.646287470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.204573 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg"] Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.212257 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5"] Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.260186 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: 
\"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:23 crc kubenswrapper[5108]: E0104 00:12:23.260720 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:23.760701383 +0000 UTC m=+117.749266469 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.268879 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-s77qp" event={"ID":"c20962fb-7828-40e8-854e-09cf60a0becd","Type":"ContainerStarted","Data":"3f2a069fa900a9e66830cf59e20310b0801657c1536f163c0777473ee9c080fc"} Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.305454 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" event={"ID":"bc782574-9478-4d61-a46b-b592c4b8a20d","Type":"ContainerStarted","Data":"468884b16234c78ffc1c58663876a6c2e771455815180132efa2145e0cdebe3d"} Jan 04 00:12:23 crc kubenswrapper[5108]: W0104 00:12:23.307286 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11fdfb17_4544_4c6a_b985_22de45dfaf04.slice/crio-52a8df9b252b66203be15d037dd583819caffd59db7b5410a96fed5db918eecc WatchSource:0}: Error finding container 
52a8df9b252b66203be15d037dd583819caffd59db7b5410a96fed5db918eecc: Status 404 returned error can't find the container with id 52a8df9b252b66203be15d037dd583819caffd59db7b5410a96fed5db918eecc Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.323615 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29458080-vx5nr" event={"ID":"52146c21-3246-4f94-b1ac-d912a24401ab","Type":"ContainerStarted","Data":"d6b836251db41e1dbad061050dc4cff7f1fea69385f48cb05b14c8335f0fae9e"} Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.327672 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2"] Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.329690 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q" event={"ID":"684e8e97-32b5-46c7-b3e0-0d89c55d7214","Type":"ContainerStarted","Data":"6e24027468c379ef401a16849cfc8e24128a586449eafa5f155b087630d21c75"} Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.341431 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" event={"ID":"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0","Type":"ContainerStarted","Data":"df02298865ea31c8fc9e93a53765420fb37b632ea15cd0ac85d12fc8326ba1e2"} Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.354631 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" event={"ID":"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024","Type":"ContainerStarted","Data":"6a635abb69e6ddbc0c89d227f0c47ebe327d77106eea5c4de78b028852fe6037"} Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.367013 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:23 crc kubenswrapper[5108]: E0104 00:12:23.367338 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:23.867319232 +0000 UTC m=+117.855884318 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.372792 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5jjj4"] Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.379169 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx"] Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.384259 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" event={"ID":"47d021a5-d9a4-4860-9edd-02555049f552","Type":"ContainerStarted","Data":"13eac472833ef6e6c533e1ab15397179f9132e5538c2998bedc5c0052f859e56"} Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.388392 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-2vn7s"] Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.420236 5108 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8"] Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.450259 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s5hd7" event={"ID":"a07ebe6a-ff42-4584-8503-9afefb4bcee1","Type":"ContainerStarted","Data":"caa998d2e9bdd67cb1bfaf9c5473ed53937716b2e33921d89a0f33c2bc62f502"} Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.474712 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:23 crc kubenswrapper[5108]: E0104 00:12:23.475283 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:23.975254766 +0000 UTC m=+117.963819852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.489698 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" event={"ID":"0ed21f10-7015-400b-bd89-9b5ba497be04","Type":"ContainerStarted","Data":"1e37aefd7ab5f07f549e53d82a601add01180aa1d9ee58b853f5712a7d4ff781"} Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.521096 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2gzj6" event={"ID":"6728c02b-1d01-45db-96f0-69f1f699fcf0","Type":"ContainerStarted","Data":"46b8dbb55389072ad0420a2736b05459ad7d346ea9bda1f0789e0a175d2b11ae"} Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.521410 5108 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-bxnjs container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.521467 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" podUID="0ed21f10-7015-400b-bd89-9b5ba497be04" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.552786 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-multus/multus-admission-controller-69db94689b-rsjsp"] Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.574141 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" event={"ID":"21fce9b3-74a6-4ddd-9011-f891ea99e09c","Type":"ContainerStarted","Data":"e85bf4592cd435095cdf2efc1fe8e0a311019ef1732698ae28e9af52088d0012"} Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.576269 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:23 crc kubenswrapper[5108]: E0104 00:12:23.577475 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:24.077457115 +0000 UTC m=+118.066022201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.602668 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-tcglk" event={"ID":"d98b3678-6b19-4259-b726-bf6940b01cbf","Type":"ContainerStarted","Data":"bfd388e4d182a17b9fd4c12634b7b6fbdb6d740dda4349cd83c8f0a0300d543c"} Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.614861 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.624902 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b96c4a7615d0a65347b947faa43f2ce0466226b8e218fb7f926e49d834809fa9"} Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.636062 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.662021 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4wfl4" podStartSLOduration=95.661998034 podStartE2EDuration="1m35.661998034s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:23.558687915 +0000 UTC m=+117.547253001" watchObservedRunningTime="2026-01-04 00:12:23.661998034 +0000 UTC m=+117.650563120" Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.662463 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2"] Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.666560 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-pn9xb" event={"ID":"12382f58-cdec-4d79-abf7-f9281092d8f0","Type":"ContainerStarted","Data":"38ff23f23ba812706734b54e4030f0bbb11b974f7c51837991376cccc3251047"} Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.670080 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fsqx9" event={"ID":"40ae343c-e956-4351-bcd6-311eeef3976c","Type":"ContainerStarted","Data":"a8b334064c6d55814e64963b2bcd89a6aae6c196e47c50c398ba7450b88d95af"} Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.686100 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr"] Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.688557 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:23 crc kubenswrapper[5108]: E0104 00:12:23.689070 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-04 00:12:24.189052759 +0000 UTC m=+118.177617845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.695124 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=29.695089784 podStartE2EDuration="29.695089784s" podCreationTimestamp="2026-01-04 00:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:23.690060416 +0000 UTC m=+117.678625512" watchObservedRunningTime="2026-01-04 00:12:23.695089784 +0000 UTC m=+117.683654880" Jan 04 00:12:23 crc kubenswrapper[5108]: W0104 00:12:23.721308 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod103b9ed4_5d88_445c_9c56_e7144fcbb923.slice/crio-4a7d79c7488fed712744144bd011a35568da7f847b51c17c419022b4fa79832e WatchSource:0}: Error finding container 4a7d79c7488fed712744144bd011a35568da7f847b51c17c419022b4fa79832e: Status 404 returned error can't find the container with id 4a7d79c7488fed712744144bd011a35568da7f847b51c17c419022b4fa79832e Jan 04 00:12:23 crc kubenswrapper[5108]: W0104 00:12:23.724973 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa56c23c_aae4_4b37_a657_9622fa143fa6.slice/crio-1dc808cacfc1b02e00ad6013957419071073b84c9d65775e2cb1b1cd19118f54 
WatchSource:0}: Error finding container 1dc808cacfc1b02e00ad6013957419071073b84c9d65775e2cb1b1cd19118f54: Status 404 returned error can't find the container with id 1dc808cacfc1b02e00ad6013957419071073b84c9d65775e2cb1b1cd19118f54 Jan 04 00:12:23 crc kubenswrapper[5108]: W0104 00:12:23.726509 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8de3007f_731e_4a3e_84ba_c6a1fcbb8641.slice/crio-7b129149259bcbddcc82afdd757927f4ca87da731484372d13ae4230d04104be WatchSource:0}: Error finding container 7b129149259bcbddcc82afdd757927f4ca87da731484372d13ae4230d04104be: Status 404 returned error can't find the container with id 7b129149259bcbddcc82afdd757927f4ca87da731484372d13ae4230d04104be Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.790163 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:23 crc kubenswrapper[5108]: E0104 00:12:23.791404 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:24.29135879 +0000 UTC m=+118.279923876 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.816623 5108 ???:1] "http: TLS handshake error from 192.168.126.11:49486: no serving certificate available for the kubelet" Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.873757 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 04 00:12:23 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Jan 04 00:12:23 crc kubenswrapper[5108]: [+]process-running ok Jan 04 00:12:23 crc kubenswrapper[5108]: healthz check failed Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.873851 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 04 00:12:23 crc kubenswrapper[5108]: I0104 00:12:23.898330 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:23 crc kubenswrapper[5108]: E0104 00:12:23.898969 5108 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:24.398946935 +0000 UTC m=+118.387512021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.000649 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:24 crc kubenswrapper[5108]: E0104 00:12:24.000940 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:24.500907757 +0000 UTC m=+118.489472853 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.001485 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:24 crc kubenswrapper[5108]: E0104 00:12:24.001809 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:24.501802042 +0000 UTC m=+118.490367138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.226406 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:24 crc kubenswrapper[5108]: E0104 00:12:24.226944 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:24.726926741 +0000 UTC m=+118.715491817 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.333095 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:24 crc kubenswrapper[5108]: E0104 00:12:24.334166 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:24.834148047 +0000 UTC m=+118.822713133 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.464033 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:24 crc kubenswrapper[5108]: E0104 00:12:24.464556 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:24.964532101 +0000 UTC m=+118.953097187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.576852 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:24 crc kubenswrapper[5108]: E0104 00:12:24.577437 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:25.077418041 +0000 UTC m=+119.065983137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.731495 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:24 crc kubenswrapper[5108]: E0104 00:12:24.731695 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:25.231654014 +0000 UTC m=+119.220219100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.732248 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.733996 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 04 00:12:24 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Jan 04 00:12:24 crc kubenswrapper[5108]: [+]process-running ok Jan 04 00:12:24 crc kubenswrapper[5108]: healthz check failed Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.734162 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 04 00:12:24 crc kubenswrapper[5108]: E0104 00:12:24.736482 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-04 00:12:25.236454534 +0000 UTC m=+119.225019620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.859168 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:24 crc kubenswrapper[5108]: E0104 00:12:24.859396 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:25.359361015 +0000 UTC m=+119.347926101 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.859693 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:24 crc kubenswrapper[5108]: E0104 00:12:24.860309 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:25.360301051 +0000 UTC m=+119.348866137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.874431 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s5hd7" event={"ID":"a07ebe6a-ff42-4584-8503-9afefb4bcee1","Type":"ContainerStarted","Data":"0eb53788bc9356b38a15da7d88afb7c2e5539ba08b0ebcc302d126bb8d6c60dd"} Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.904946 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" event={"ID":"bef2c9f7-de7a-4b8b-a712-36bb05ee31e0","Type":"ContainerStarted","Data":"b1b8c96a8c86624320d423b4ff6aa970c0c0748e1f8035a61f1cb9f0ae7c88b7"} Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.939820 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7" event={"ID":"b4e8a0ac-421f-4300-8f7c-33e9128a0000","Type":"ContainerStarted","Data":"27b247e97596a888e08f1b9b82ef386a8c546e10f0a54d30793117eab3f69912"} Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.941417 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-2vn7s" event={"ID":"7dde2d02-01f5-44da-87e7-72ba520acaa5","Type":"ContainerStarted","Data":"ec5a1a64aec4b57d2f8dad9dcdc5a5356a5159f922fdff259f76677e9f2338e1"} Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.943105 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" event={"ID":"14a3d6fe-b87f-473d-b105-d2cf34343253","Type":"ContainerStarted","Data":"fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e"} Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.944380 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-wl97g" event={"ID":"fe36c33b-eeaa-4b44-9ccd-d44131ccebce","Type":"ContainerStarted","Data":"aef50c7c70e87c2010751309911df857b19b90b7a92c2da4b0a9217274eef54a"} Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.945266 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-rsjsp" event={"ID":"103b9ed4-5d88-445c-9c56-e7144fcbb923","Type":"ContainerStarted","Data":"4a7d79c7488fed712744144bd011a35568da7f847b51c17c419022b4fa79832e"} Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.987436 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:24 crc kubenswrapper[5108]: E0104 00:12:24.987783 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:25.487736426 +0000 UTC m=+119.476301522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.988114 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:24 crc kubenswrapper[5108]: E0104 00:12:24.988579 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:25.488570878 +0000 UTC m=+119.477135964 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.989457 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-shks7" event={"ID":"149cc7c1-09e7-4088-8c9c-b42e4ea2b604","Type":"ContainerStarted","Data":"ea8b9547c2ed365671c68de3927da5154d1e849caed2c1dff8e63cd764218c5a"} Jan 04 00:12:24 crc kubenswrapper[5108]: I0104 00:12:24.999303 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" event={"ID":"11fdfb17-4544-4c6a-b985-22de45dfaf04","Type":"ContainerStarted","Data":"52a8df9b252b66203be15d037dd583819caffd59db7b5410a96fed5db918eecc"} Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.018946 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" event={"ID":"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0","Type":"ContainerStarted","Data":"70a9bf32fb08c2500814857c3777f4739582b8acee4b984a1e2bd55f7693707b"} Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.024409 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx" event={"ID":"e91a34cb-17f6-49fe-a5a3-5c391614ed39","Type":"ContainerStarted","Data":"26e5c6fc3ca6a92eed3f5b0f23b8c6b53d5f1a71ccedfb549144d02423d1529b"} Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.037444 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" event={"ID":"8de3007f-731e-4a3e-84ba-c6a1fcbb8641","Type":"ContainerStarted","Data":"7b129149259bcbddcc82afdd757927f4ca87da731484372d13ae4230d04104be"} Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.053391 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" event={"ID":"47d021a5-d9a4-4860-9edd-02555049f552","Type":"ContainerStarted","Data":"b2cd595827bb0053aa2bdea5452f08027090e66bd479b34d09d54703c76b3bbd"} Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.057133 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" event={"ID":"0dd4bb82-e5af-4b6e-a6c3-d1e21ffe8802","Type":"ContainerStarted","Data":"e4c0534b9dd6a7ce007382f7cf49d7a323b49706a720a96f7e40abe6b84055ab"} Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.062527 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2gzj6" event={"ID":"6728c02b-1d01-45db-96f0-69f1f699fcf0","Type":"ContainerStarted","Data":"906d9a3455f3c3c40fbd725f5f90f5cf19114dd19019de8cb519e7182ba5be83"} Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.063848 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw" event={"ID":"c76448af-1e86-4765-83a0-7d9cd39bd5a6","Type":"ContainerStarted","Data":"0f80f2e599ee094620840cf2c0534f36eae4683664b1a368b9b9393020c76b5b"} Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.064859 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" event={"ID":"841a53bb-0876-4f9d-b4bf-b01da8e9307b","Type":"ContainerStarted","Data":"d4d727b0f778adb2e4b330ad95203c7333f619f7f87469cf3adb290d6164ed56"} Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.083984 5108 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5" event={"ID":"2afd2e0a-36e5-4af7-a427-0893b7521e9d","Type":"ContainerStarted","Data":"5e1c9738092ca670e28a5d2537b35541b3d39843c02deefeb5a9c0ef3a950145"}
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.088861 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:25 crc kubenswrapper[5108]: E0104 00:12:25.089383 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:25.589358318 +0000 UTC m=+119.577923404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.098030 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gdrn8" event={"ID":"bc232cbd-783b-4787-bfd6-d814e7b2cd4f","Type":"ContainerStarted","Data":"4cea777930dd57d34b4d6b76c7dc9021b84f75a24db7962f15efadcef69005d3"}
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.101746 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-tvrx6" event={"ID":"6a476be9-e3a0-47e4-ab8f-29a4601a9134","Type":"ContainerStarted","Data":"cba18e8eec9b447e3fc556f0e0989e62c9d9b17f047c80be5f377b443f91b2eb"}
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.156700 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52"
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.156754 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-wl97g"
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.156765 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw"
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.191654 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:25 crc kubenswrapper[5108]: E0104 00:12:25.192190 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:25.692171834 +0000 UTC m=+119.680736930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.209498 5108 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-8qhfw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.209618 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw" podUID="c76448af-1e86-4765-83a0-7d9cd39bd5a6" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.209518 5108 patch_prober.go:28] interesting pod/console-operator-67c89758df-wl97g container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.209715 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-wl97g" podUID="fe36c33b-eeaa-4b44-9ccd-d44131ccebce" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.326288 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-glcdh" event={"ID":"68f75634-8fb1-40a4-801d-6355d62d81f8","Type":"ContainerStarted","Data":"191206243a1a0f63cd7205d359d923c37cfe1f1de594f5553e0fe986f027155a"}
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.329109 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.329695 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-glcdh"
Jan 04 00:12:25 crc kubenswrapper[5108]: E0104 00:12:25.330931 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:25.830903236 +0000 UTC m=+119.819468322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.336213 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.336575 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.369136 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-s5hd7" podStartSLOduration=97.369118264 podStartE2EDuration="1m37.369118264s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:25.366606015 +0000 UTC m=+119.355171101" watchObservedRunningTime="2026-01-04 00:12:25.369118264 +0000 UTC m=+119.357683350"
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.433128 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:25 crc kubenswrapper[5108]: E0104 00:12:25.435009 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:25.934990745 +0000 UTC m=+119.923555831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.486791 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" event={"ID":"df9ddf01-bee2-4ba3-bba8-a6038b624504","Type":"ContainerStarted","Data":"39ebf49990eaa21a308943d9a7a46763bb294318530869a69fa682b281ed047f"}
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.496173 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m" event={"ID":"3e38c1fa-0767-4ade-86be-f890237f9c94","Type":"ContainerStarted","Data":"366934cce2930139dc0be7030c6478b1ec2f8f7e1a134b6e61646594f1c8e866"}
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.499322 5108 ???:1] "http: TLS handshake error from 192.168.126.11:49492: no serving certificate available for the kubelet"
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.501058 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" event={"ID":"5b38a4e7-457e-47c5-8fd6-2e67b92a3974","Type":"ContainerStarted","Data":"29569b7694cceb9f747f141785f2dfaae8dc1f6c36ab5ab03f915bb28b231d9c"}
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.520571 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" event={"ID":"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024","Type":"ContainerStarted","Data":"02f9428cfbceb44b1090900540c7f7935bafef78343d58da19cfa46d929845b6"}
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.535209 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:25 crc kubenswrapper[5108]: E0104 00:12:25.535596 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:26.035573369 +0000 UTC m=+120.024138455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.537102 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-j22zl" podStartSLOduration=97.53707873 podStartE2EDuration="1m37.53707873s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:25.536412292 +0000 UTC m=+119.524977378" watchObservedRunningTime="2026-01-04 00:12:25.53707873 +0000 UTC m=+119.525643816"
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.573378 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q" event={"ID":"684e8e97-32b5-46c7-b3e0-0d89c55d7214","Type":"ContainerStarted","Data":"9e8b850842eb28ebfd9456159d531a688cb406b52d4dc4b03e0f58288b62db46"}
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.583088 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2" event={"ID":"aa56c23c-aae4-4b37-a657-9622fa143fa6","Type":"ContainerStarted","Data":"1dc808cacfc1b02e00ad6013957419071073b84c9d65775e2cb1b1cd19118f54"}
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.595140 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52"
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.596408 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" podStartSLOduration=8.596371362 podStartE2EDuration="8.596371362s" podCreationTimestamp="2026-01-04 00:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:25.58415723 +0000 UTC m=+119.572722326" watchObservedRunningTime="2026-01-04 00:12:25.596371362 +0000 UTC m=+119.584936468"
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.640545 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:25 crc kubenswrapper[5108]: E0104 00:12:25.648025 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:26.148003265 +0000 UTC m=+120.136568351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.672193 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw" podStartSLOduration=96.672171554 podStartE2EDuration="1m36.672171554s" podCreationTimestamp="2026-01-04 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:25.67094478 +0000 UTC m=+119.659509866" watchObservedRunningTime="2026-01-04 00:12:25.672171554 +0000 UTC m=+119.660736640"
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.700577 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 04 00:12:25 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Jan 04 00:12:25 crc kubenswrapper[5108]: [+]process-running ok
Jan 04 00:12:25 crc kubenswrapper[5108]: healthz check failed
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.700678 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.700857 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs"
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.769528 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:25 crc kubenswrapper[5108]: E0104 00:12:25.770218 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:26.270175248 +0000 UTC m=+120.258740334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.771935 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-glcdh" podStartSLOduration=97.771918495 podStartE2EDuration="1m37.771918495s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:25.771902665 +0000 UTC m=+119.760467761" watchObservedRunningTime="2026-01-04 00:12:25.771918495 +0000 UTC m=+119.760483581"
Jan 04 00:12:25 crc kubenswrapper[5108]: I0104 00:12:25.892937 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:25 crc kubenswrapper[5108]: E0104 00:12:25.893443 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:26.393423148 +0000 UTC m=+120.381988244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:25.998821 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:26 crc kubenswrapper[5108]: E0104 00:12:25.999176 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:26.499154182 +0000 UTC m=+120.487719268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.109501 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:26 crc kubenswrapper[5108]: E0104 00:12:26.109960 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:26.609942064 +0000 UTC m=+120.598507140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.136135 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" podStartSLOduration=98.136108726 podStartE2EDuration="1m38.136108726s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:26.134952354 +0000 UTC m=+120.123517450" watchObservedRunningTime="2026-01-04 00:12:26.136108726 +0000 UTC m=+120.124673812"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.137014 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2gzj6" podStartSLOduration=10.13700762 podStartE2EDuration="10.13700762s" podCreationTimestamp="2026-01-04 00:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:25.997656422 +0000 UTC m=+119.986221508" watchObservedRunningTime="2026-01-04 00:12:26.13700762 +0000 UTC m=+120.125572696"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.220153 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:26 crc kubenswrapper[5108]: E0104 00:12:26.220936 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:26.720913041 +0000 UTC m=+120.709478127 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.287091 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-shks7" podStartSLOduration=98.28706565 podStartE2EDuration="1m38.28706565s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:26.281894019 +0000 UTC m=+120.270459105" watchObservedRunningTime="2026-01-04 00:12:26.28706565 +0000 UTC m=+120.275630736"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.324704 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.324795 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.324840 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.324870 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.324892 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.326391 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 04 00:12:26 crc kubenswrapper[5108]: E0104 00:12:26.335233 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:26.835190958 +0000 UTC m=+120.823756044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.413963 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-wl97g" podStartSLOduration=98.413931328 podStartE2EDuration="1m38.413931328s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:26.353098455 +0000 UTC m=+120.341663551" watchObservedRunningTime="2026-01-04 00:12:26.413931328 +0000 UTC m=+120.402496414"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.426040 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:26 crc kubenswrapper[5108]: E0104 00:12:26.431431 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:26.931387143 +0000 UTC m=+120.919952229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.466280 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.479760 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-j8nb7" podStartSLOduration=98.479735548 podStartE2EDuration="1m38.479735548s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:26.478883325 +0000 UTC m=+120.467448411" watchObservedRunningTime="2026-01-04 00:12:26.479735548 +0000 UTC m=+120.468300634"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.501262 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.510606 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.510805 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.531052 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.550678 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.550798 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs\") pod \"network-metrics-daemon-mlfqf\" (UID: \"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\") " pod="openshift-multus/network-metrics-daemon-mlfqf"
Jan 04 00:12:26 crc kubenswrapper[5108]: E0104 00:12:26.551506 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:27.051488629 +0000 UTC m=+121.040053715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.553028 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.595573 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.630872 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6feab616-6edc-4a90-8ee9-f5ae1c2e80c5-metrics-certs\") pod \"network-metrics-daemon-mlfqf\" (UID: \"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5\") " pod="openshift-multus/network-metrics-daemon-mlfqf"
Jan 04 00:12:26 crc kubenswrapper[5108]: E0104 00:12:26.667018 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:27.166990668 +0000 UTC m=+121.155555744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.666815 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.691054 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:26 crc kubenswrapper[5108]: E0104 00:12:26.691783 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:27.191761883 +0000 UTC m=+121.180326969 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.748948 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 04 00:12:26 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Jan 04 00:12:26 crc kubenswrapper[5108]: [+]process-running ok Jan 04 00:12:26 crc kubenswrapper[5108]: healthz check failed Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.750104 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.766604 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" podStartSLOduration=98.766537025 podStartE2EDuration="1m38.766537025s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:26.736971411 +0000 UTC m=+120.725536497" watchObservedRunningTime="2026-01-04 00:12:26.766537025 +0000 UTC m=+120.755102111" Jan 04 00:12:26 crc kubenswrapper[5108]: I0104 00:12:26.899232 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:26 crc kubenswrapper[5108]: E0104 00:12:26.996349 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:27.496313642 +0000 UTC m=+121.484878728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.009278 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:27 crc kubenswrapper[5108]: E0104 00:12:27.010004 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:27.509979473 +0000 UTC m=+121.498544559 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.113471 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:27 crc kubenswrapper[5108]: E0104 00:12:27.113969 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:27.61394284 +0000 UTC m=+121.602507926 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.130742 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.135921 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" event={"ID":"bef2c9f7-de7a-4b8b-a712-36bb05ee31e0","Type":"ContainerStarted","Data":"2301543bbee685e8e27585e5657f2c01e985c704541b4743ff4e6fdeb1a5785a"} Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.137778 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.141308 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlfqf" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.155223 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-2vn7s" event={"ID":"7dde2d02-01f5-44da-87e7-72ba520acaa5","Type":"ContainerStarted","Data":"cbda74a40db8278bd049a9b33ac1866674ea96a978cb58a9e3d5ef0365849b21"} Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.158364 5108 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-8vtr8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.158458 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" podUID="bef2c9f7-de7a-4b8b-a712-36bb05ee31e0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.191369 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96248" podStartSLOduration=99.191332214 podStartE2EDuration="1m39.191332214s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:27.130815368 +0000 UTC m=+121.119380474" watchObservedRunningTime="2026-01-04 00:12:27.191332214 +0000 UTC m=+121.179897300" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.280070 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx" 
event={"ID":"e91a34cb-17f6-49fe-a5a3-5c391614ed39","Type":"ContainerStarted","Data":"16141b8e53103de2bdfed66dce9984ca892d270941dc1a421fbfdd256a9a7d6b"} Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.281876 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:27 crc kubenswrapper[5108]: E0104 00:12:27.284190 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:27.784172928 +0000 UTC m=+121.772738014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.303665 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5" event={"ID":"2afd2e0a-36e5-4af7-a427-0893b7521e9d","Type":"ContainerStarted","Data":"e7aa60235cbd877e7b0daa0cb85e44f82ca84774dbf98006f30418d6a86a7c49"} Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.305592 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-cbp9q" 
podStartSLOduration=98.305571019 podStartE2EDuration="1m38.305571019s" podCreationTimestamp="2026-01-04 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:27.19265122 +0000 UTC m=+121.181216316" watchObservedRunningTime="2026-01-04 00:12:27.305571019 +0000 UTC m=+121.294136115" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.305781 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-pn9xb" podStartSLOduration=98.305776865 podStartE2EDuration="1m38.305776865s" podCreationTimestamp="2026-01-04 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:27.305302202 +0000 UTC m=+121.293867298" watchObservedRunningTime="2026-01-04 00:12:27.305776865 +0000 UTC m=+121.294341951" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.342833 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gdrn8" event={"ID":"bc232cbd-783b-4787-bfd6-d814e7b2cd4f","Type":"ContainerStarted","Data":"41ce55644a99f99f1750b00d09c7cc6467381a63a4f3536ae52907e218c8fa0e"} Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.357641 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fsqx9" event={"ID":"40ae343c-e956-4351-bcd6-311eeef3976c","Type":"ContainerStarted","Data":"b44d1b45cc2a0d4448358970a77cebcd6c73e890cd02976f7a6bc98fc49a1d86"} Jan 04 00:12:27 crc kubenswrapper[5108]: E0104 00:12:27.398987 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-04 00:12:27.898958389 +0000 UTC m=+121.887523475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.398840 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.399625 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:27 crc kubenswrapper[5108]: E0104 00:12:27.400029 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:27.900021048 +0000 UTC m=+121.888586124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.406681 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-pn9xb" event={"ID":"12382f58-cdec-4d79-abf7-f9281092d8f0","Type":"ContainerStarted","Data":"9cdc196b3b6e1277e66f037d8d6ff6c388a5874a45e58708acfa27c578696fd8"} Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.417359 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-2vn7s" podStartSLOduration=98.417339618 podStartE2EDuration="1m38.417339618s" podCreationTimestamp="2026-01-04 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:27.416003632 +0000 UTC m=+121.404568728" watchObservedRunningTime="2026-01-04 00:12:27.417339618 +0000 UTC m=+121.405904704" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.434003 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" event={"ID":"df9ddf01-bee2-4ba3-bba8-a6038b624504","Type":"ContainerStarted","Data":"c8b8b524006fa785d80ecdad5d32f7418513d8a562fb03e2abff3dba46d26e03"} Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.434094 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 
00:12:27.448164 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.448410 5108 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-8qhfw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.448667 5108 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-5mch2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body= Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.448707 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" podUID="df9ddf01-bee2-4ba3-bba8-a6038b624504" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.448779 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw" podUID="c76448af-1e86-4765-83a0-7d9cd39bd5a6" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.457385 5108 patch_prober.go:28] interesting pod/console-operator-67c89758df-wl97g container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection 
refused" start-of-body= Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.457461 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-wl97g" podUID="fe36c33b-eeaa-4b44-9ccd-d44131ccebce" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.457386 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.457540 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.469090 5108 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-tptrl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.469825 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" podUID="e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.479380 5108 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-zgssx" podStartSLOduration=99.479346764 podStartE2EDuration="1m39.479346764s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:27.462104315 +0000 UTC m=+121.450669421" watchObservedRunningTime="2026-01-04 00:12:27.479346764 +0000 UTC m=+121.467911850" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.493820 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.505485 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.507446 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:27 crc kubenswrapper[5108]: E0104 00:12:27.510693 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:28.010669806 +0000 UTC m=+121.999234882 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.584684 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.611511 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:27 crc kubenswrapper[5108]: E0104 00:12:27.611971 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:28.111941889 +0000 UTC m=+122.100506975 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.666714 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" podStartSLOduration=98.666668786 podStartE2EDuration="1m38.666668786s" podCreationTimestamp="2026-01-04 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:27.651824222 +0000 UTC m=+121.640389308" watchObservedRunningTime="2026-01-04 00:12:27.666668786 +0000 UTC m=+121.655233872" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.666998 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-978t5" podStartSLOduration=98.666992035 podStartE2EDuration="1m38.666992035s" podCreationTimestamp="2026-01-04 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:27.585081849 +0000 UTC m=+121.573646945" watchObservedRunningTime="2026-01-04 00:12:27.666992035 +0000 UTC m=+121.655557121" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.733113 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:27 crc kubenswrapper[5108]: E0104 00:12:27.733667 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:28.233638207 +0000 UTC m=+122.222203293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.775512 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 04 00:12:27 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Jan 04 00:12:27 crc kubenswrapper[5108]: [+]process-running ok Jan 04 00:12:27 crc kubenswrapper[5108]: healthz check failed Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.776051 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 04 00:12:27 crc kubenswrapper[5108]: I0104 00:12:27.835833 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:27 crc kubenswrapper[5108]: E0104 00:12:27.838043 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:28.338022114 +0000 UTC m=+122.326587200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:27.991432 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:27.991932 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:27.993809 5108 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-h5ft9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:27.993867 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" 
podUID="47d021a5-d9a4-4860-9edd-02555049f552" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.039263 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:28 crc kubenswrapper[5108]: E0104 00:12:28.039857 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:28.539816602 +0000 UTC m=+122.528381688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.042814 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" podStartSLOduration=99.042781252 podStartE2EDuration="1m39.042781252s" podCreationTimestamp="2026-01-04 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:28.040295894 +0000 UTC m=+122.028860990" watchObservedRunningTime="2026-01-04 
00:12:28.042781252 +0000 UTC m=+122.031346338" Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.167822 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:28 crc kubenswrapper[5108]: E0104 00:12:28.168370 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:28.668348935 +0000 UTC m=+122.656914021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.177524 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" podStartSLOduration=99.177499855 podStartE2EDuration="1m39.177499855s" podCreationTimestamp="2026-01-04 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:28.0857484 +0000 UTC m=+122.074313496" watchObservedRunningTime="2026-01-04 00:12:28.177499855 +0000 UTC m=+122.166064941" Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.178585 5108 ???:1] "http: 
TLS handshake error from 192.168.126.11:49502: no serving certificate available for the kubelet" Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.273261 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:28 crc kubenswrapper[5108]: E0104 00:12:28.273979 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:28.773952687 +0000 UTC m=+122.762517773 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.305321 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hvq52"] Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.379007 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:28 crc 
kubenswrapper[5108]: E0104 00:12:28.379535 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:28.879516586 +0000 UTC m=+122.868081672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.395919 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-gdrn8" podStartSLOduration=11.395890882 podStartE2EDuration="11.395890882s" podCreationTimestamp="2026-01-04 00:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:28.39398184 +0000 UTC m=+122.382546946" watchObservedRunningTime="2026-01-04 00:12:28.395890882 +0000 UTC m=+122.384455988" Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.482349 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:28 crc kubenswrapper[5108]: E0104 00:12:28.482806 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:28.982775034 +0000 UTC m=+122.971340120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.584751 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:28 crc kubenswrapper[5108]: E0104 00:12:28.585266 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:29.08524891 +0000 UTC m=+123.073813996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.597746 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-rsjsp" event={"ID":"103b9ed4-5d88-445c-9c56-e7144fcbb923","Type":"ContainerStarted","Data":"b3ffd552eac6c563f4194034113ae4624606e3785251d2c9ebd375f1be854af4"} Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.601715 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" event={"ID":"11fdfb17-4544-4c6a-b985-22de45dfaf04","Type":"ContainerStarted","Data":"c1cbaf8472370cf4040baf99dae68e8f0c5b4241b9a509b711ec30117a6b6d4a"} Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.606227 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" event={"ID":"8de3007f-731e-4a3e-84ba-c6a1fcbb8641","Type":"ContainerStarted","Data":"d40bb39c76a90cd586ac5f697cb27f5582eb2c27dec9fd1094cfea2ca2ea9236"} Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.606293 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" event={"ID":"8de3007f-731e-4a3e-84ba-c6a1fcbb8641","Type":"ContainerStarted","Data":"195a7fd53e444630e6179a152969d3403ec15e855853596a8a908e71d09df4ae"} Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.611565 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-799b87ffcd-tvrx6" event={"ID":"6a476be9-e3a0-47e4-ab8f-29a4601a9134","Type":"ContainerStarted","Data":"8b90a554f8028a6275defe75236080aff16f4aec61f118ceb67e3d8d8cea7f68"} Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.642935 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fsqx9" event={"ID":"40ae343c-e956-4351-bcd6-311eeef3976c","Type":"ContainerStarted","Data":"037db8f5bc0db3ab4b9d34be43dfb27154feea58996d4e90c71f06b9e0b397b2"} Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.655127 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-fsqx9" Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.686849 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:28 crc kubenswrapper[5108]: E0104 00:12:28.690539 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:29.190513211 +0000 UTC m=+123.179078297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.690925 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:28 crc kubenswrapper[5108]: E0104 00:12:28.698422 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:29.198404856 +0000 UTC m=+123.186969932 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.699740 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.712225 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m" event={"ID":"3e38c1fa-0767-4ade-86be-f890237f9c94","Type":"ContainerStarted","Data":"aff933feb027703b10277d1714f939a5ea6f75af68fbadf998a27b04019a88aa"} Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.745280 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2" event={"ID":"aa56c23c-aae4-4b37-a657-9622fa143fa6","Type":"ContainerStarted","Data":"df9814f829142b0d8456f627b41bf883d8815c16d71b24dd80b1391f62a2abdd"} Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.751854 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.751937 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 
10.217.0.16:8080: connect: connection refused" Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.751943 5108 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-tptrl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.752001 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" podUID="e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.764439 5108 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-8vtr8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.764533 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" podUID="bef2c9f7-de7a-4b8b-a712-36bb05ee31e0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.771537 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7bpfz" Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.772061 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8qhfw" Jan 04 00:12:28 crc 
kubenswrapper[5108]: I0104 00:12:28.808777 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:28 crc kubenswrapper[5108]: E0104 00:12:28.810409 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:29.31037919 +0000 UTC m=+123.298944276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:28 crc kubenswrapper[5108]: I0104 00:12:28.912029 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:28 crc kubenswrapper[5108]: E0104 00:12:28.922193 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-04 00:12:29.42216539 +0000 UTC m=+123.410730476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:28.977942 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 04 00:12:29 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Jan 04 00:12:29 crc kubenswrapper[5108]: [+]process-running ok Jan 04 00:12:29 crc kubenswrapper[5108]: healthz check failed Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:28.978057 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.173591 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:29 crc kubenswrapper[5108]: E0104 00:12:29.173783 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:29.673747539 +0000 UTC m=+123.662312625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.174253 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:29 crc kubenswrapper[5108]: E0104 00:12:29.174707 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:29.674690234 +0000 UTC m=+123.663255320 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.330702 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:29 crc kubenswrapper[5108]: E0104 00:12:29.331063 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:29.831037075 +0000 UTC m=+123.819602161 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.449082 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:29 crc kubenswrapper[5108]: E0104 00:12:29.449807 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:29.949763213 +0000 UTC m=+123.938328299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.470890 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-wmv7m" podStartSLOduration=101.460146425 podStartE2EDuration="1m41.460146425s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:29.458029567 +0000 UTC m=+123.446594653" watchObservedRunningTime="2026-01-04 00:12:29.460146425 +0000 UTC m=+123.448711511" Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.534997 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-tvrx6" podStartSLOduration=101.534970799 podStartE2EDuration="1m41.534970799s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:29.532351939 +0000 UTC m=+123.520917155" watchObservedRunningTime="2026-01-04 00:12:29.534970799 +0000 UTC m=+123.523535885" Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.557129 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:29 crc kubenswrapper[5108]: E0104 00:12:29.557469 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:30.05744437 +0000 UTC m=+124.046009456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.639832 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-fsqx9" podStartSLOduration=13.639777578 podStartE2EDuration="13.639777578s" podCreationTimestamp="2026-01-04 00:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:29.636433938 +0000 UTC m=+123.624999024" watchObservedRunningTime="2026-01-04 00:12:29.639777578 +0000 UTC m=+123.628342664" Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.658524 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:29 crc kubenswrapper[5108]: E0104 00:12:29.659029 5108 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:30.159011732 +0000 UTC m=+124.147576818 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.698393 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 04 00:12:29 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Jan 04 00:12:29 crc kubenswrapper[5108]: [+]process-running ok Jan 04 00:12:29 crc kubenswrapper[5108]: healthz check failed Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.698491 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.699779 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.699844 5108 prober.go:120] "Probe 
failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.723595 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-v9rxg" podStartSLOduration=101.723576057 podStartE2EDuration="1m41.723576057s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:29.72258945 +0000 UTC m=+123.711154556" watchObservedRunningTime="2026-01-04 00:12:29.723576057 +0000 UTC m=+123.712141143"
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.724273 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-42gmr" podStartSLOduration=100.724266746 podStartE2EDuration="1m40.724266746s" podCreationTimestamp="2026-01-04 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:29.692850841 +0000 UTC m=+123.681415937" watchObservedRunningTime="2026-01-04 00:12:29.724266746 +0000 UTC m=+123.712831822"
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.747911 5108 patch_prober.go:28] interesting pod/console-operator-67c89758df-wl97g container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": context deadline exceeded" start-of-body=
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.747966 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-wl97g" podUID="fe36c33b-eeaa-4b44-9ccd-d44131ccebce" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": context deadline exceeded"
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.756276 5108 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-5mch2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": context deadline exceeded" start-of-body=
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.756351 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" podUID="df9ddf01-bee2-4ba3-bba8-a6038b624504" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": context deadline exceeded"
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.763631 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:29 crc kubenswrapper[5108]: E0104 00:12:29.764424 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:30.264396336 +0000 UTC m=+124.252961422 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.789159 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2" event={"ID":"aa56c23c-aae4-4b37-a657-9622fa143fa6","Type":"ContainerStarted","Data":"69cc60bc72acca066c5749ae184717194a46858bab7cc720277916a262ece1e3"}
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.790167 5108 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-8vtr8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.790230 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8" podUID="bef2c9f7-de7a-4b8b-a712-36bb05ee31e0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.792732 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" podUID="14a3d6fe-b87f-473d-b105-d2cf34343253" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e" gracePeriod=30
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.793564 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2"
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.824488 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-shks7"
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.824859 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-shks7"
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.830957 5108 patch_prober.go:28] interesting pod/console-64d44f6ddf-shks7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.831087 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-shks7" podUID="149cc7c1-09e7-4088-8c9c-b42e4ea2b604" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused"
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.869714 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:29 crc kubenswrapper[5108]: E0104 00:12:29.870868 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:30.37085147 +0000 UTC m=+124.359416556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.911499 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2" podStartSLOduration=100.911014013 podStartE2EDuration="1m40.911014013s" podCreationTimestamp="2026-01-04 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:29.903903499 +0000 UTC m=+123.892468585" watchObservedRunningTime="2026-01-04 00:12:29.911014013 +0000 UTC m=+123.899579099"
Jan 04 00:12:29 crc kubenswrapper[5108]: I0104 00:12:29.976531 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:29 crc kubenswrapper[5108]: E0104 00:12:29.977032 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:30.477007577 +0000 UTC m=+124.465572663 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.004829 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.017249 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.030748 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\""
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.035961 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\""
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.078901 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e709361-d053-4f53-b853-aede95948b7b-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"5e709361-d053-4f53-b853-aede95948b7b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.079099 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e709361-d053-4f53-b853-aede95948b7b-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"5e709361-d053-4f53-b853-aede95948b7b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.079174 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.079499 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Jan 04 00:12:30 crc kubenswrapper[5108]: E0104 00:12:30.079875 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:30.579835593 +0000 UTC m=+124.568400869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.184755 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.185128 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e709361-d053-4f53-b853-aede95948b7b-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"5e709361-d053-4f53-b853-aede95948b7b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.185239 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e709361-d053-4f53-b853-aede95948b7b-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"5e709361-d053-4f53-b853-aede95948b7b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 04 00:12:30 crc kubenswrapper[5108]: E0104 00:12:30.185509 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:30.685458894 +0000 UTC m=+124.674023980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.185637 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e709361-d053-4f53-b853-aede95948b7b-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"5e709361-d053-4f53-b853-aede95948b7b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.264267 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e709361-d053-4f53-b853-aede95948b7b-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"5e709361-d053-4f53-b853-aede95948b7b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.287895 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:30 crc kubenswrapper[5108]: E0104 00:12:30.293657 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:30.793630054 +0000 UTC m=+124.782195140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.348854 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.393171 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:30 crc kubenswrapper[5108]: E0104 00:12:30.393599 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:30.893566932 +0000 UTC m=+124.882132018 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.497131 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:30 crc kubenswrapper[5108]: E0104 00:12:30.497622 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:30.99760233 +0000 UTC m=+124.986167416 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.620494 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:30 crc kubenswrapper[5108]: E0104 00:12:30.620879 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:31.12084467 +0000 UTC m=+125.109409756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.622392 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mlfqf"]
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.694647 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 04 00:12:30 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Jan 04 00:12:30 crc kubenswrapper[5108]: [+]process-running ok
Jan 04 00:12:30 crc kubenswrapper[5108]: healthz check failed
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.695189 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.722409 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:30 crc kubenswrapper[5108]: E0104 00:12:30.723100 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:31.22307432 +0000 UTC m=+125.211639406 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.807773 5108 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-tptrl container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.808264 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" podUID="e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.827909 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:30 crc kubenswrapper[5108]: E0104 00:12:30.828747 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:31.328720812 +0000 UTC m=+125.317285898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.906481 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-rsjsp" event={"ID":"103b9ed4-5d88-445c-9c56-e7144fcbb923","Type":"ContainerStarted","Data":"b09762fd5bfd8409a0728cb7e46fc6b991d24aacf0ef178242cf514fb03422d8"}
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.919350 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mlfqf" event={"ID":"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5","Type":"ContainerStarted","Data":"83b82855c5e69b0c97bfc5370fb85ffd789fff9f1774e827574ea83e6e2bbae6"}
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.925715 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"7a582f45625c036ed09db11e363f07459efaae1d29c2fb0da7b3dc3c63f7f158"}
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.927053 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"19729e46bf185d6e301a4c1aa2e759f8b0bfcaf7a161ba699951a150b9114cdd"}
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.928762 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"06acc0fdab64ae0c03024234baa11058369f8c95ce22cf5c0669bf6f2a6df553"}
Jan 04 00:12:30 crc kubenswrapper[5108]: I0104 00:12:30.929991 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:30 crc kubenswrapper[5108]: E0104 00:12:30.930553 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:31.4305364 +0000 UTC m=+125.419101486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:31 crc kubenswrapper[5108]: I0104 00:12:31.008961 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-rsjsp" podStartSLOduration=102.008932701 podStartE2EDuration="1m42.008932701s" podCreationTimestamp="2026-01-04 00:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:31.005094367 +0000 UTC m=+124.993659453" watchObservedRunningTime="2026-01-04 00:12:31.008932701 +0000 UTC m=+124.997497787"
Jan 04 00:12:31 crc kubenswrapper[5108]: I0104 00:12:31.032637 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:31 crc kubenswrapper[5108]: E0104 00:12:31.033237 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:31.53318172 +0000 UTC m=+125.521746806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:31 crc kubenswrapper[5108]: I0104 00:12:31.136244 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:31 crc kubenswrapper[5108]: E0104 00:12:31.136682 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:31.636663254 +0000 UTC m=+125.625228340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:31 crc kubenswrapper[5108]: I0104 00:12:31.242557 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:31 crc kubenswrapper[5108]: E0104 00:12:31.242862 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:31.74283209 +0000 UTC m=+125.731397176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:31 crc kubenswrapper[5108]: I0104 00:12:31.347268 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:31 crc kubenswrapper[5108]: E0104 00:12:31.348157 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:31.848139243 +0000 UTC m=+125.836704329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:31 crc kubenswrapper[5108]: I0104 00:12:31.450433 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:31 crc kubenswrapper[5108]: E0104 00:12:31.450780 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:31.950758893 +0000 UTC m=+125.939323979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:31 crc kubenswrapper[5108]: I0104 00:12:31.552702 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:31 crc kubenswrapper[5108]: E0104 00:12:31.553139 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:32.053122046 +0000 UTC m=+126.041687132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:31 crc kubenswrapper[5108]: I0104 00:12:31.654573 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:31 crc kubenswrapper[5108]: E0104 00:12:31.655052 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:32.155026967 +0000 UTC m=+126.143592053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:31 crc kubenswrapper[5108]: I0104 00:12:31.723354 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 04 00:12:31 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Jan 04 00:12:31 crc kubenswrapper[5108]: [+]process-running ok
Jan 04 00:12:31 crc kubenswrapper[5108]: healthz check failed
Jan 04 00:12:31 crc kubenswrapper[5108]: I0104 00:12:31.723893 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 04 00:12:31 crc kubenswrapper[5108]: I0104 00:12:31.758627 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:31 crc kubenswrapper[5108]: E0104 00:12:31.759133 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed.
No retries permitted until 2026-01-04 00:12:32.259099166 +0000 UTC m=+126.247664252 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:31 crc kubenswrapper[5108]: I0104 00:12:31.859764 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:31 crc kubenswrapper[5108]: E0104 00:12:31.860162 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:32.360139222 +0000 UTC m=+126.348704308 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:31 crc kubenswrapper[5108]: I0104 00:12:31.960981 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 04 00:12:31 crc kubenswrapper[5108]: I0104 00:12:31.961569 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:31 crc kubenswrapper[5108]: E0104 00:12:31.962004 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:32.461983361 +0000 UTC m=+126.450548447 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:31 crc kubenswrapper[5108]: I0104 00:12:31.971306 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mlfqf" event={"ID":"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5","Type":"ContainerStarted","Data":"54da9e63bf83e42f6c9805de12aafbafa67afc1d8c0f4866afa83030cf3d24ca"} Jan 04 00:12:31 crc kubenswrapper[5108]: W0104 00:12:31.992051 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5e709361_d053_4f53_b853_aede95948b7b.slice/crio-27df4442130cad9fc62f2475f7d17b71a4cd4a2092bc6ea59d1c76a1f1150d98 WatchSource:0}: Error finding container 27df4442130cad9fc62f2475f7d17b71a4cd4a2092bc6ea59d1c76a1f1150d98: Status 404 returned error can't find the container with id 27df4442130cad9fc62f2475f7d17b71a4cd4a2092bc6ea59d1c76a1f1150d98 Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.009875 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"823ba6f4407d133c03100d327f357f7c3a5ccb754df2af3c3182669cb7c807ed"} Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.044665 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"b4033a6f6856d2c8492390fc627b780e7ed0c32ce602f6a1e56f771715869c1e"} Jan 04 00:12:32 crc 
kubenswrapper[5108]: I0104 00:12:32.063026 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:32 crc kubenswrapper[5108]: E0104 00:12:32.063983 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:32.563934823 +0000 UTC m=+126.552499909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.086781 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"64027d195d283ccc3510ab88f07707b3a2eeb1c1a38a1a5b5d0744c85d35069e"} Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.087590 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.194341 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:32 crc kubenswrapper[5108]: E0104 00:12:32.196625 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:32.69660651 +0000 UTC m=+126.685171596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.245351 5108 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-5mch2 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.245946 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" podUID="df9ddf01-bee2-4ba3-bba8-a6038b624504" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.300473 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:32 crc kubenswrapper[5108]: E0104 00:12:32.301001 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:32.800976207 +0000 UTC m=+126.789541283 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.402189 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:32 crc kubenswrapper[5108]: E0104 00:12:32.402703 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:32.902674323 +0000 UTC m=+126.891239569 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.504964 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:32 crc kubenswrapper[5108]: E0104 00:12:32.505251 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:33.005177319 +0000 UTC m=+126.993742415 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.505862 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:32 crc kubenswrapper[5108]: E0104 00:12:32.506369 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:33.006351671 +0000 UTC m=+126.994916757 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.537469 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ff989"] Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.601554 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ff989"] Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.601831 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ff989" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.607862 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:32 crc kubenswrapper[5108]: E0104 00:12:32.608141 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:33.108117397 +0000 UTC m=+127.096682483 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.616888 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.672089 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9px8h"] Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.693833 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9px8h"] Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.694137 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9px8h" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.702434 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 04 00:12:32 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Jan 04 00:12:32 crc kubenswrapper[5108]: [+]process-running ok Jan 04 00:12:32 crc kubenswrapper[5108]: healthz check failed Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.702509 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.703023 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.718896 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.718961 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dng69\" (UniqueName: \"kubernetes.io/projected/320a6eb9-3704-43c9-84b9-25580545ff50-kube-api-access-dng69\") pod \"community-operators-ff989\" (UID: \"320a6eb9-3704-43c9-84b9-25580545ff50\") " pod="openshift-marketplace/community-operators-ff989" 
Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.719022 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320a6eb9-3704-43c9-84b9-25580545ff50-utilities\") pod \"community-operators-ff989\" (UID: \"320a6eb9-3704-43c9-84b9-25580545ff50\") " pod="openshift-marketplace/community-operators-ff989" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.719065 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320a6eb9-3704-43c9-84b9-25580545ff50-catalog-content\") pod \"community-operators-ff989\" (UID: \"320a6eb9-3704-43c9-84b9-25580545ff50\") " pod="openshift-marketplace/community-operators-ff989" Jan 04 00:12:32 crc kubenswrapper[5108]: E0104 00:12:32.719518 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:33.219498826 +0000 UTC m=+127.208063912 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.830928 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.831394 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a762f8cf-a77d-477e-8141-1bb1e02d8744-catalog-content\") pod \"certified-operators-9px8h\" (UID: \"a762f8cf-a77d-477e-8141-1bb1e02d8744\") " pod="openshift-marketplace/certified-operators-9px8h" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.831432 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a762f8cf-a77d-477e-8141-1bb1e02d8744-utilities\") pod \"certified-operators-9px8h\" (UID: \"a762f8cf-a77d-477e-8141-1bb1e02d8744\") " pod="openshift-marketplace/certified-operators-9px8h" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.831461 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dng69\" (UniqueName: \"kubernetes.io/projected/320a6eb9-3704-43c9-84b9-25580545ff50-kube-api-access-dng69\") pod \"community-operators-ff989\" (UID: 
\"320a6eb9-3704-43c9-84b9-25580545ff50\") " pod="openshift-marketplace/community-operators-ff989" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.831503 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320a6eb9-3704-43c9-84b9-25580545ff50-utilities\") pod \"community-operators-ff989\" (UID: \"320a6eb9-3704-43c9-84b9-25580545ff50\") " pod="openshift-marketplace/community-operators-ff989" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.831525 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320a6eb9-3704-43c9-84b9-25580545ff50-catalog-content\") pod \"community-operators-ff989\" (UID: \"320a6eb9-3704-43c9-84b9-25580545ff50\") " pod="openshift-marketplace/community-operators-ff989" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.831543 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfbbh\" (UniqueName: \"kubernetes.io/projected/a762f8cf-a77d-477e-8141-1bb1e02d8744-kube-api-access-cfbbh\") pod \"certified-operators-9px8h\" (UID: \"a762f8cf-a77d-477e-8141-1bb1e02d8744\") " pod="openshift-marketplace/certified-operators-9px8h" Jan 04 00:12:32 crc kubenswrapper[5108]: E0104 00:12:32.831689 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:33.331665165 +0000 UTC m=+127.320230251 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.836595 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320a6eb9-3704-43c9-84b9-25580545ff50-catalog-content\") pod \"community-operators-ff989\" (UID: \"320a6eb9-3704-43c9-84b9-25580545ff50\") " pod="openshift-marketplace/community-operators-ff989" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.839444 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320a6eb9-3704-43c9-84b9-25580545ff50-utilities\") pod \"community-operators-ff989\" (UID: \"320a6eb9-3704-43c9-84b9-25580545ff50\") " pod="openshift-marketplace/community-operators-ff989" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.851490 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5n9gg"] Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.858624 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5n9gg" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.887980 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5n9gg"] Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.897839 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dng69\" (UniqueName: \"kubernetes.io/projected/320a6eb9-3704-43c9-84b9-25580545ff50-kube-api-access-dng69\") pod \"community-operators-ff989\" (UID: \"320a6eb9-3704-43c9-84b9-25580545ff50\") " pod="openshift-marketplace/community-operators-ff989" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.939278 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cfbbh\" (UniqueName: \"kubernetes.io/projected/a762f8cf-a77d-477e-8141-1bb1e02d8744-kube-api-access-cfbbh\") pod \"certified-operators-9px8h\" (UID: \"a762f8cf-a77d-477e-8141-1bb1e02d8744\") " pod="openshift-marketplace/certified-operators-9px8h" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.939357 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a762f8cf-a77d-477e-8141-1bb1e02d8744-catalog-content\") pod \"certified-operators-9px8h\" (UID: \"a762f8cf-a77d-477e-8141-1bb1e02d8744\") " pod="openshift-marketplace/certified-operators-9px8h" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.939391 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a762f8cf-a77d-477e-8141-1bb1e02d8744-utilities\") pod \"certified-operators-9px8h\" (UID: \"a762f8cf-a77d-477e-8141-1bb1e02d8744\") " pod="openshift-marketplace/certified-operators-9px8h" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.939424 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:32 crc kubenswrapper[5108]: E0104 00:12:32.939811 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:33.439794035 +0000 UTC m=+127.428359121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.940816 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a762f8cf-a77d-477e-8141-1bb1e02d8744-catalog-content\") pod \"certified-operators-9px8h\" (UID: \"a762f8cf-a77d-477e-8141-1bb1e02d8744\") " pod="openshift-marketplace/certified-operators-9px8h" Jan 04 00:12:32 crc kubenswrapper[5108]: I0104 00:12:32.944378 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a762f8cf-a77d-477e-8141-1bb1e02d8744-utilities\") pod \"certified-operators-9px8h\" (UID: \"a762f8cf-a77d-477e-8141-1bb1e02d8744\") " pod="openshift-marketplace/certified-operators-9px8h" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.005809 5108 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-ff989" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.021270 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfbbh\" (UniqueName: \"kubernetes.io/projected/a762f8cf-a77d-477e-8141-1bb1e02d8744-kube-api-access-cfbbh\") pod \"certified-operators-9px8h\" (UID: \"a762f8cf-a77d-477e-8141-1bb1e02d8744\") " pod="openshift-marketplace/certified-operators-9px8h" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.042946 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.043381 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-utilities\") pod \"community-operators-5n9gg\" (UID: \"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b\") " pod="openshift-marketplace/community-operators-5n9gg" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.043507 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-catalog-content\") pod \"community-operators-5n9gg\" (UID: \"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b\") " pod="openshift-marketplace/community-operators-5n9gg" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.043540 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cnbk\" (UniqueName: \"kubernetes.io/projected/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-kube-api-access-2cnbk\") pod 
\"community-operators-5n9gg\" (UID: \"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b\") " pod="openshift-marketplace/community-operators-5n9gg" Jan 04 00:12:33 crc kubenswrapper[5108]: E0104 00:12:33.043737 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:33.543706289 +0000 UTC m=+127.532271375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.065460 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9px8h" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.127258 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zs7zk"] Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.139265 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zs7zk" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.159653 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.159710 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-utilities\") pod \"certified-operators-zs7zk\" (UID: \"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9\") " pod="openshift-marketplace/certified-operators-zs7zk" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.159780 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-catalog-content\") pod \"community-operators-5n9gg\" (UID: \"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b\") " pod="openshift-marketplace/community-operators-5n9gg" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.159801 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2cnbk\" (UniqueName: \"kubernetes.io/projected/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-kube-api-access-2cnbk\") pod \"community-operators-5n9gg\" (UID: \"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b\") " pod="openshift-marketplace/community-operators-5n9gg" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.159830 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmr9w\" (UniqueName: 
\"kubernetes.io/projected/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-kube-api-access-gmr9w\") pod \"certified-operators-zs7zk\" (UID: \"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9\") " pod="openshift-marketplace/certified-operators-zs7zk" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.159865 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-utilities\") pod \"community-operators-5n9gg\" (UID: \"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b\") " pod="openshift-marketplace/community-operators-5n9gg" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.159884 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-catalog-content\") pod \"certified-operators-zs7zk\" (UID: \"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9\") " pod="openshift-marketplace/certified-operators-zs7zk" Jan 04 00:12:33 crc kubenswrapper[5108]: E0104 00:12:33.160272 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:33.660256759 +0000 UTC m=+127.648821835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.160801 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-catalog-content\") pod \"community-operators-5n9gg\" (UID: \"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b\") " pod="openshift-marketplace/community-operators-5n9gg" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.161558 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-utilities\") pod \"community-operators-5n9gg\" (UID: \"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b\") " pod="openshift-marketplace/community-operators-5n9gg" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.212633 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mlfqf" event={"ID":"6feab616-6edc-4a90-8ee9-f5ae1c2e80c5","Type":"ContainerStarted","Data":"ece936bb10d99e8d643fd0cdd3fceb356225fcff5ab291c40872d8517f4dffed"} Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.251824 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zs7zk"] Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.261328 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.261556 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gmr9w\" (UniqueName: \"kubernetes.io/projected/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-kube-api-access-gmr9w\") pod \"certified-operators-zs7zk\" (UID: \"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9\") " pod="openshift-marketplace/certified-operators-zs7zk" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.261605 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-catalog-content\") pod \"certified-operators-zs7zk\" (UID: \"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9\") " pod="openshift-marketplace/certified-operators-zs7zk" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.261629 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-utilities\") pod \"certified-operators-zs7zk\" (UID: \"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9\") " pod="openshift-marketplace/certified-operators-zs7zk" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.262138 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-utilities\") pod \"certified-operators-zs7zk\" (UID: \"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9\") " pod="openshift-marketplace/certified-operators-zs7zk" Jan 04 00:12:33 crc kubenswrapper[5108]: E0104 00:12:33.262605 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:33.76257977 +0000 UTC m=+127.751144856 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.263215 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-catalog-content\") pod \"certified-operators-zs7zk\" (UID: \"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9\") " pod="openshift-marketplace/certified-operators-zs7zk" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.264114 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"5e709361-d053-4f53-b853-aede95948b7b","Type":"ContainerStarted","Data":"27df4442130cad9fc62f2475f7d17b71a4cd4a2092bc6ea59d1c76a1f1150d98"} Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.343582 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cnbk\" (UniqueName: \"kubernetes.io/projected/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-kube-api-access-2cnbk\") pod \"community-operators-5n9gg\" (UID: \"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b\") " pod="openshift-marketplace/community-operators-5n9gg" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.363386 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:33 crc kubenswrapper[5108]: E0104 00:12:33.364350 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:33.864330956 +0000 UTC m=+127.852896042 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.397540 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmr9w\" (UniqueName: \"kubernetes.io/projected/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-kube-api-access-gmr9w\") pod \"certified-operators-zs7zk\" (UID: \"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9\") " pod="openshift-marketplace/certified-operators-zs7zk" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.468479 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:33 crc kubenswrapper[5108]: E0104 00:12:33.469573 5108 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:33.969549008 +0000 UTC m=+127.958114094 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.552366 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5n9gg" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.556334 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zs7zk" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.570485 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.570809 5108 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-h5ft9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 04 00:12:33 crc kubenswrapper[5108]: [+]log ok Jan 04 00:12:33 crc kubenswrapper[5108]: [+]etcd ok Jan 04 00:12:33 crc kubenswrapper[5108]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 04 00:12:33 crc kubenswrapper[5108]: [+]poststarthook/generic-apiserver-start-informers ok Jan 04 00:12:33 crc kubenswrapper[5108]: [+]poststarthook/max-in-flight-filter ok Jan 04 00:12:33 crc kubenswrapper[5108]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 04 00:12:33 crc kubenswrapper[5108]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 04 00:12:33 crc kubenswrapper[5108]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 04 00:12:33 crc kubenswrapper[5108]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 04 00:12:33 crc kubenswrapper[5108]: [+]poststarthook/project.openshift.io-projectcache ok Jan 04 00:12:33 crc kubenswrapper[5108]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 04 00:12:33 crc kubenswrapper[5108]: [+]poststarthook/openshift.io-startinformers ok Jan 04 00:12:33 crc kubenswrapper[5108]: 
[+]poststarthook/openshift.io-restmapperupdater ok Jan 04 00:12:33 crc kubenswrapper[5108]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 04 00:12:33 crc kubenswrapper[5108]: livez check failed Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.570865 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" podUID="47d021a5-d9a4-4860-9edd-02555049f552" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 04 00:12:33 crc kubenswrapper[5108]: E0104 00:12:33.571463 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:34.071444917 +0000 UTC m=+128.060010003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.671416 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:33 crc kubenswrapper[5108]: E0104 00:12:33.671805 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:34.171776465 +0000 UTC m=+128.160341551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.724083 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 04 00:12:33 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Jan 04 00:12:33 crc kubenswrapper[5108]: [+]process-running ok Jan 04 00:12:33 crc kubenswrapper[5108]: healthz check failed Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.724172 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.786047 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:33 crc kubenswrapper[5108]: E0104 00:12:33.786725 5108 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:34.286706809 +0000 UTC m=+128.275271895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.889180 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:33 crc kubenswrapper[5108]: E0104 00:12:33.889373 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:34.389335509 +0000 UTC m=+128.377900595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.889846 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:33 crc kubenswrapper[5108]: E0104 00:12:33.890718 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:34.390707466 +0000 UTC m=+128.379272552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.897241 5108 ???:1] "http: TLS handshake error from 192.168.126.11:36064: no serving certificate available for the kubelet" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.900451 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-mlfqf" podStartSLOduration=105.90041514 podStartE2EDuration="1m45.90041514s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:33.869848809 +0000 UTC m=+127.858413905" watchObservedRunningTime="2026-01-04 00:12:33.90041514 +0000 UTC m=+127.888980226" Jan 04 00:12:33 crc kubenswrapper[5108]: I0104 00:12:33.991390 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:33 crc kubenswrapper[5108]: E0104 00:12:33.991809 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-04 00:12:34.491783755 +0000 UTC m=+128.480348841 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:34 crc kubenswrapper[5108]: I0104 00:12:34.094469 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:34 crc kubenswrapper[5108]: E0104 00:12:34.094961 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:34.594941579 +0000 UTC m=+128.583506665 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:34 crc kubenswrapper[5108]: I0104 00:12:34.195639 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:34 crc kubenswrapper[5108]: E0104 00:12:34.195995 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:34.695970455 +0000 UTC m=+128.684535541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:34 crc kubenswrapper[5108]: I0104 00:12:34.297224 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:34 crc kubenswrapper[5108]: E0104 00:12:34.297801 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:34.797783154 +0000 UTC m=+128.786348240 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:34 crc kubenswrapper[5108]: I0104 00:12:34.398454 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:34 crc kubenswrapper[5108]: E0104 00:12:34.399034 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:34.899002925 +0000 UTC m=+128.887568011 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:34 crc kubenswrapper[5108]: I0104 00:12:34.505845 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:34 crc kubenswrapper[5108]: E0104 00:12:34.507166 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:35.00694896 +0000 UTC m=+128.995514046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:34 crc kubenswrapper[5108]: I0104 00:12:34.585223 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"5e709361-d053-4f53-b853-aede95948b7b","Type":"ContainerStarted","Data":"d39f38d61cb9a1ff63d34ab9a2548810d4c68aff11c1341eb760a22281b556f0"}
Jan 04 00:12:34 crc kubenswrapper[5108]: I0104 00:12:34.610051 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:34 crc kubenswrapper[5108]: E0104 00:12:34.610518 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:35.110484645 +0000 UTC m=+129.099049731 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:34 crc kubenswrapper[5108]: I0104 00:12:34.610662 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:34 crc kubenswrapper[5108]: E0104 00:12:34.612280 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:35.112271324 +0000 UTC m=+129.100836410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:34 crc kubenswrapper[5108]: I0104 00:12:34.627100 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=5.627070306 podStartE2EDuration="5.627070306s" podCreationTimestamp="2026-01-04 00:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:34.625362829 +0000 UTC m=+128.613927925" watchObservedRunningTime="2026-01-04 00:12:34.627070306 +0000 UTC m=+128.615635392"
Jan 04 00:12:34 crc kubenswrapper[5108]: I0104 00:12:34.638951 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9px8h"]
Jan 04 00:12:34 crc kubenswrapper[5108]: I0104 00:12:34.693219 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 04 00:12:34 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Jan 04 00:12:34 crc kubenswrapper[5108]: [+]process-running ok
Jan 04 00:12:34 crc kubenswrapper[5108]: healthz check failed
Jan 04 00:12:34 crc kubenswrapper[5108]: I0104 00:12:34.693342 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 04 00:12:34 crc kubenswrapper[5108]: I0104 00:12:34.712047 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:34 crc kubenswrapper[5108]: E0104 00:12:34.712495 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:35.212468017 +0000 UTC m=+129.201033103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:34 crc kubenswrapper[5108]: I0104 00:12:34.777025 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ff989"]
Jan 04 00:12:34 crc kubenswrapper[5108]: I0104 00:12:34.814098 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:34 crc kubenswrapper[5108]: E0104 00:12:34.814756 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:35.314727217 +0000 UTC m=+129.303292483 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:34 crc kubenswrapper[5108]: I0104 00:12:34.916828 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:34 crc kubenswrapper[5108]: E0104 00:12:34.917185 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:35.417162002 +0000 UTC m=+129.405727088 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.019416 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:35 crc kubenswrapper[5108]: E0104 00:12:35.020030 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:35.520005938 +0000 UTC m=+129.508571024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.049342 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-28926"]
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.120187 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-28926"]
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.120638 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28926"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.121087 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:35 crc kubenswrapper[5108]: E0104 00:12:35.121567 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:35.621544468 +0000 UTC m=+129.610109554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.130351 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 04 00:12:35 crc kubenswrapper[5108]: E0104 00:12:35.148560 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 04 00:12:35 crc kubenswrapper[5108]: E0104 00:12:35.165945 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 04 00:12:35 crc kubenswrapper[5108]: E0104 00:12:35.205485 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 04 00:12:35 crc kubenswrapper[5108]: E0104 00:12:35.205605 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" podUID="14a3d6fe-b87f-473d-b105-d2cf34343253" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.223774 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.223874 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6pj5\" (UniqueName: \"kubernetes.io/projected/59b92be9-237e-4252-9bbe-a71908afb6e9-kube-api-access-v6pj5\") pod \"redhat-marketplace-28926\" (UID: \"59b92be9-237e-4252-9bbe-a71908afb6e9\") " pod="openshift-marketplace/redhat-marketplace-28926"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.223899 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59b92be9-237e-4252-9bbe-a71908afb6e9-utilities\") pod \"redhat-marketplace-28926\" (UID: \"59b92be9-237e-4252-9bbe-a71908afb6e9\") " pod="openshift-marketplace/redhat-marketplace-28926"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.223941 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59b92be9-237e-4252-9bbe-a71908afb6e9-catalog-content\") pod \"redhat-marketplace-28926\" (UID: \"59b92be9-237e-4252-9bbe-a71908afb6e9\") " pod="openshift-marketplace/redhat-marketplace-28926"
Jan 04 00:12:35 crc kubenswrapper[5108]: E0104 00:12:35.224358 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:35.724339224 +0000 UTC m=+129.712904310 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.265375 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zs7zk"]
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.325031 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.325451 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59b92be9-237e-4252-9bbe-a71908afb6e9-catalog-content\") pod \"redhat-marketplace-28926\" (UID: \"59b92be9-237e-4252-9bbe-a71908afb6e9\") " pod="openshift-marketplace/redhat-marketplace-28926"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.325545 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v6pj5\" (UniqueName: \"kubernetes.io/projected/59b92be9-237e-4252-9bbe-a71908afb6e9-kube-api-access-v6pj5\") pod \"redhat-marketplace-28926\" (UID: \"59b92be9-237e-4252-9bbe-a71908afb6e9\") " pod="openshift-marketplace/redhat-marketplace-28926"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.325576 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59b92be9-237e-4252-9bbe-a71908afb6e9-utilities\") pod \"redhat-marketplace-28926\" (UID: \"59b92be9-237e-4252-9bbe-a71908afb6e9\") " pod="openshift-marketplace/redhat-marketplace-28926"
Jan 04 00:12:35 crc kubenswrapper[5108]: E0104 00:12:35.325771 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:35.825720149 +0000 UTC m=+129.814285235 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.326031 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59b92be9-237e-4252-9bbe-a71908afb6e9-utilities\") pod \"redhat-marketplace-28926\" (UID: \"59b92be9-237e-4252-9bbe-a71908afb6e9\") " pod="openshift-marketplace/redhat-marketplace-28926"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.326484 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59b92be9-237e-4252-9bbe-a71908afb6e9-catalog-content\") pod \"redhat-marketplace-28926\" (UID: \"59b92be9-237e-4252-9bbe-a71908afb6e9\") " pod="openshift-marketplace/redhat-marketplace-28926"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.379647 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6pj5\" (UniqueName: \"kubernetes.io/projected/59b92be9-237e-4252-9bbe-a71908afb6e9-kube-api-access-v6pj5\") pod \"redhat-marketplace-28926\" (UID: \"59b92be9-237e-4252-9bbe-a71908afb6e9\") " pod="openshift-marketplace/redhat-marketplace-28926"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.427886 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:35 crc kubenswrapper[5108]: E0104 00:12:35.428704 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:35.928661118 +0000 UTC m=+129.917226204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.438333 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wpxsz"]
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.477358 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28926"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.524902 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wpxsz"]
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.525130 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wpxsz"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.525507 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5n9gg"]
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.529746 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:35 crc kubenswrapper[5108]: E0104 00:12:35.530132 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:36.030108916 +0000 UTC m=+130.018674002 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.597133 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.614680 5108 generic.go:358] "Generic (PLEG): container finished" podID="5e709361-d053-4f53-b853-aede95948b7b" containerID="d39f38d61cb9a1ff63d34ab9a2548810d4c68aff11c1341eb760a22281b556f0" exitCode=0
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.614822 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"5e709361-d053-4f53-b853-aede95948b7b","Type":"ContainerDied","Data":"d39f38d61cb9a1ff63d34ab9a2548810d4c68aff11c1341eb760a22281b556f0"}
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.618274 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs7zk" event={"ID":"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9","Type":"ContainerStarted","Data":"1dc63a1bd4e0d961f3eacfb051bcc1786949001d74709d7f44dc55c0bd0e6327"}
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.620760 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" event={"ID":"841a53bb-0876-4f9d-b4bf-b01da8e9307b","Type":"ContainerStarted","Data":"e77fa94ce0ddd6d900386d282c111b028f094e1ba63619bb2723d6fd3f8304c8"}
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.622404 5108 generic.go:358] "Generic (PLEG): container finished" podID="320a6eb9-3704-43c9-84b9-25580545ff50" containerID="2498dbcf829a4273cdf43954e9afe8c54f16f260eab393f1c3f171f0dbfd275d" exitCode=0
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.622495 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff989" event={"ID":"320a6eb9-3704-43c9-84b9-25580545ff50","Type":"ContainerDied","Data":"2498dbcf829a4273cdf43954e9afe8c54f16f260eab393f1c3f171f0dbfd275d"}
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.622514 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff989" event={"ID":"320a6eb9-3704-43c9-84b9-25580545ff50","Type":"ContainerStarted","Data":"a00db4ce7d726050aa2830753c8585884469ef6aaf94809b9e71f1711279436e"}
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.630037 5108 generic.go:358] "Generic (PLEG): container finished" podID="a762f8cf-a77d-477e-8141-1bb1e02d8744" containerID="3ae8f7ea05b6e70896de33871a27a0220359dd318d363fce9c4b2dad444454f6" exitCode=0
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.630230 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9px8h" event={"ID":"a762f8cf-a77d-477e-8141-1bb1e02d8744","Type":"ContainerDied","Data":"3ae8f7ea05b6e70896de33871a27a0220359dd318d363fce9c4b2dad444454f6"}
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.630276 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9px8h" event={"ID":"a762f8cf-a77d-477e-8141-1bb1e02d8744","Type":"ContainerStarted","Data":"31f2336c22471a37bf881fc0d187124f25fdea36778b6c181cd7655b66138e00"}
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.633388 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh47z\" (UniqueName: \"kubernetes.io/projected/d28a78c9-d785-4300-bbfe-580917daaeb7-kube-api-access-gh47z\") pod \"redhat-marketplace-wpxsz\" (UID: \"d28a78c9-d785-4300-bbfe-580917daaeb7\") " pod="openshift-marketplace/redhat-marketplace-wpxsz"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.633460 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.633530 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d28a78c9-d785-4300-bbfe-580917daaeb7-catalog-content\") pod \"redhat-marketplace-wpxsz\" (UID: \"d28a78c9-d785-4300-bbfe-580917daaeb7\") " pod="openshift-marketplace/redhat-marketplace-wpxsz"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.633558 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d28a78c9-d785-4300-bbfe-580917daaeb7-utilities\") pod \"redhat-marketplace-wpxsz\" (UID: \"d28a78c9-d785-4300-bbfe-580917daaeb7\") " pod="openshift-marketplace/redhat-marketplace-wpxsz"
Jan 04 00:12:35 crc kubenswrapper[5108]: E0104 00:12:35.633968 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:36.133948409 +0000 UTC m=+130.122513495 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.647266 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-clk26"]
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.686011 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 04 00:12:35 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Jan 04 00:12:35 crc kubenswrapper[5108]: [+]process-running ok
Jan 04 00:12:35 crc kubenswrapper[5108]: healthz check failed
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.686114 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.712833 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-clk26"]
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.713115 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-clk26"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.735495 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:35 crc kubenswrapper[5108]: E0104 00:12:35.735727 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:36.235671835 +0000 UTC m=+130.224236931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.736282 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-catalog-content\") pod \"redhat-operators-clk26\" (UID: \"1aa34c52-ea52-42e1-a7b1-a6f22e32642b\") " pod="openshift-marketplace/redhat-operators-clk26"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.736369 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d28a78c9-d785-4300-bbfe-580917daaeb7-catalog-content\") pod \"redhat-marketplace-wpxsz\" (UID: \"d28a78c9-d785-4300-bbfe-580917daaeb7\") " pod="openshift-marketplace/redhat-marketplace-wpxsz"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.736413 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d28a78c9-d785-4300-bbfe-580917daaeb7-utilities\") pod \"redhat-marketplace-wpxsz\" (UID: \"d28a78c9-d785-4300-bbfe-580917daaeb7\") " pod="openshift-marketplace/redhat-marketplace-wpxsz"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.736500 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-utilities\") pod \"redhat-operators-clk26\" (UID: \"1aa34c52-ea52-42e1-a7b1-a6f22e32642b\") " pod="openshift-marketplace/redhat-operators-clk26"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.736626 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpx6b\" (UniqueName: \"kubernetes.io/projected/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-kube-api-access-bpx6b\") pod \"redhat-operators-clk26\" (UID: \"1aa34c52-ea52-42e1-a7b1-a6f22e32642b\") " pod="openshift-marketplace/redhat-operators-clk26"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.736677 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gh47z\" (UniqueName: \"kubernetes.io/projected/d28a78c9-d785-4300-bbfe-580917daaeb7-kube-api-access-gh47z\") pod \"redhat-marketplace-wpxsz\" (UID: \"d28a78c9-d785-4300-bbfe-580917daaeb7\") " pod="openshift-marketplace/redhat-marketplace-wpxsz"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.736800 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.737094 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d28a78c9-d785-4300-bbfe-580917daaeb7-utilities\") pod \"redhat-marketplace-wpxsz\" (UID: \"d28a78c9-d785-4300-bbfe-580917daaeb7\") " pod="openshift-marketplace/redhat-marketplace-wpxsz"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.737409 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d28a78c9-d785-4300-bbfe-580917daaeb7-catalog-content\") pod \"redhat-marketplace-wpxsz\" (UID: \"d28a78c9-d785-4300-bbfe-580917daaeb7\") " pod="openshift-marketplace/redhat-marketplace-wpxsz"
Jan 04 00:12:35 crc kubenswrapper[5108]: E0104 00:12:35.737779 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:36.237766252 +0000 UTC m=+130.226331528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.743674 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.800533 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh47z\" (UniqueName: \"kubernetes.io/projected/d28a78c9-d785-4300-bbfe-580917daaeb7-kube-api-access-gh47z\") pod \"redhat-marketplace-wpxsz\" (UID: \"d28a78c9-d785-4300-bbfe-580917daaeb7\") " pod="openshift-marketplace/redhat-marketplace-wpxsz"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.843691 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-fsqx9"
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.844826 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.845453 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-catalog-content\") pod \"redhat-operators-clk26\" (UID: \"1aa34c52-ea52-42e1-a7b1-a6f22e32642b\") "
pod="openshift-marketplace/redhat-operators-clk26" Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.845515 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-utilities\") pod \"redhat-operators-clk26\" (UID: \"1aa34c52-ea52-42e1-a7b1-a6f22e32642b\") " pod="openshift-marketplace/redhat-operators-clk26" Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.845576 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bpx6b\" (UniqueName: \"kubernetes.io/projected/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-kube-api-access-bpx6b\") pod \"redhat-operators-clk26\" (UID: \"1aa34c52-ea52-42e1-a7b1-a6f22e32642b\") " pod="openshift-marketplace/redhat-operators-clk26" Jan 04 00:12:35 crc kubenswrapper[5108]: E0104 00:12:35.847544 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:36.347508355 +0000 UTC m=+130.336073441 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.848271 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-catalog-content\") pod \"redhat-operators-clk26\" (UID: \"1aa34c52-ea52-42e1-a7b1-a6f22e32642b\") " pod="openshift-marketplace/redhat-operators-clk26" Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.849274 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-utilities\") pod \"redhat-operators-clk26\" (UID: \"1aa34c52-ea52-42e1-a7b1-a6f22e32642b\") " pod="openshift-marketplace/redhat-operators-clk26" Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.851843 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z5bj8"] Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.916777 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wpxsz" Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.925761 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5bj8"] Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.926012 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z5bj8" Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.936220 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpx6b\" (UniqueName: \"kubernetes.io/projected/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-kube-api-access-bpx6b\") pod \"redhat-operators-clk26\" (UID: \"1aa34c52-ea52-42e1-a7b1-a6f22e32642b\") " pod="openshift-marketplace/redhat-operators-clk26" Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.942697 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-clk26" Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.953490 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f3cf98-e60e-4844-b59e-14d18c3d9559-utilities\") pod \"redhat-operators-z5bj8\" (UID: \"49f3cf98-e60e-4844-b59e-14d18c3d9559\") " pod="openshift-marketplace/redhat-operators-z5bj8" Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.953589 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f3cf98-e60e-4844-b59e-14d18c3d9559-catalog-content\") pod \"redhat-operators-z5bj8\" (UID: \"49f3cf98-e60e-4844-b59e-14d18c3d9559\") " pod="openshift-marketplace/redhat-operators-z5bj8" Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.953619 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl26m\" (UniqueName: \"kubernetes.io/projected/49f3cf98-e60e-4844-b59e-14d18c3d9559-kube-api-access-nl26m\") pod \"redhat-operators-z5bj8\" (UID: \"49f3cf98-e60e-4844-b59e-14d18c3d9559\") " pod="openshift-marketplace/redhat-operators-z5bj8" Jan 04 00:12:35 crc kubenswrapper[5108]: I0104 00:12:35.953647 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:35 crc kubenswrapper[5108]: E0104 00:12:35.955492 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:36.45547354 +0000 UTC m=+130.444038616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.101501 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:36 crc kubenswrapper[5108]: E0104 00:12:36.101898 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:36.601866561 +0000 UTC m=+130.590431837 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.102287 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f3cf98-e60e-4844-b59e-14d18c3d9559-utilities\") pod \"redhat-operators-z5bj8\" (UID: \"49f3cf98-e60e-4844-b59e-14d18c3d9559\") " pod="openshift-marketplace/redhat-operators-z5bj8" Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.102379 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f3cf98-e60e-4844-b59e-14d18c3d9559-catalog-content\") pod \"redhat-operators-z5bj8\" (UID: \"49f3cf98-e60e-4844-b59e-14d18c3d9559\") " pod="openshift-marketplace/redhat-operators-z5bj8" Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.102411 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nl26m\" (UniqueName: \"kubernetes.io/projected/49f3cf98-e60e-4844-b59e-14d18c3d9559-kube-api-access-nl26m\") pod \"redhat-operators-z5bj8\" (UID: \"49f3cf98-e60e-4844-b59e-14d18c3d9559\") " pod="openshift-marketplace/redhat-operators-z5bj8" Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.102439 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: 
\"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:36 crc kubenswrapper[5108]: E0104 00:12:36.102834 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:36.602826626 +0000 UTC m=+130.591391712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.104013 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f3cf98-e60e-4844-b59e-14d18c3d9559-utilities\") pod \"redhat-operators-z5bj8\" (UID: \"49f3cf98-e60e-4844-b59e-14d18c3d9559\") " pod="openshift-marketplace/redhat-operators-z5bj8" Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.109580 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f3cf98-e60e-4844-b59e-14d18c3d9559-catalog-content\") pod \"redhat-operators-z5bj8\" (UID: \"49f3cf98-e60e-4844-b59e-14d18c3d9559\") " pod="openshift-marketplace/redhat-operators-z5bj8" Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.149919 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl26m\" (UniqueName: \"kubernetes.io/projected/49f3cf98-e60e-4844-b59e-14d18c3d9559-kube-api-access-nl26m\") pod \"redhat-operators-z5bj8\" (UID: 
\"49f3cf98-e60e-4844-b59e-14d18c3d9559\") " pod="openshift-marketplace/redhat-operators-z5bj8" Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.223896 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:36 crc kubenswrapper[5108]: E0104 00:12:36.224193 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:36.724168425 +0000 UTC m=+130.712733511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.266393 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z5bj8" Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.325911 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:36 crc kubenswrapper[5108]: E0104 00:12:36.326588 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:36.826565939 +0000 UTC m=+130.815131025 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.362239 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-28926"] Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.427057 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:36 crc kubenswrapper[5108]: E0104 00:12:36.427260 5108 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:36.927216345 +0000 UTC m=+130.915781431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.427737 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:36 crc kubenswrapper[5108]: E0104 00:12:36.428122 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:36.92810432 +0000 UTC m=+130.916669406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.537388 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:36 crc kubenswrapper[5108]: E0104 00:12:36.537927 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:37.037879444 +0000 UTC m=+131.026444530 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.640411 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:36 crc kubenswrapper[5108]: E0104 00:12:36.640904 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:37.140884635 +0000 UTC m=+131.129449721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.656816 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n9gg" event={"ID":"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b","Type":"ContainerStarted","Data":"2fd33844376409ee6d662a930a1ba8389dbd2d664fdf986a91dd87ae14974966"} Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.660653 5108 generic.go:358] "Generic (PLEG): container finished" podID="1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9" containerID="cd59c850dc839dd29d57d9034d6abfb51d15df1f1d8f3b54277f3b39fa3c7cb4" exitCode=0 Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.660859 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs7zk" event={"ID":"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9","Type":"ContainerDied","Data":"cd59c850dc839dd29d57d9034d6abfb51d15df1f1d8f3b54277f3b39fa3c7cb4"} Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.678421 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28926" event={"ID":"59b92be9-237e-4252-9bbe-a71908afb6e9","Type":"ContainerStarted","Data":"16d1e9a58054623ac50b41cccb3a04588806f536689383f8cfe4b6bdbbe50b36"} Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.707515 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 04 
00:12:36 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Jan 04 00:12:36 crc kubenswrapper[5108]: [+]process-running ok Jan 04 00:12:36 crc kubenswrapper[5108]: healthz check failed Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.707623 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.742375 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:36 crc kubenswrapper[5108]: E0104 00:12:36.742578 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:37.242542968 +0000 UTC m=+131.231108064 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.742925 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:36 crc kubenswrapper[5108]: E0104 00:12:36.743981 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:37.243969917 +0000 UTC m=+131.232535003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.844330 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:36 crc kubenswrapper[5108]: E0104 00:12:36.844964 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:37.344934151 +0000 UTC m=+131.333499237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:36 crc kubenswrapper[5108]: I0104 00:12:36.946598 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:36 crc kubenswrapper[5108]: E0104 00:12:36.947119 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:37.447097669 +0000 UTC m=+131.435662745 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.000028 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-clk26"] Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.048431 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:37 crc kubenswrapper[5108]: E0104 00:12:37.048752 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:37.54866168 +0000 UTC m=+131.537226766 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.049330 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:37 crc kubenswrapper[5108]: E0104 00:12:37.050435 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:37.550425768 +0000 UTC m=+131.538990854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.089507 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wpxsz"] Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.154401 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:37 crc kubenswrapper[5108]: E0104 00:12:37.154539 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:37.654505758 +0000 UTC m=+131.643070844 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.155278 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:37 crc kubenswrapper[5108]: E0104 00:12:37.155779 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:37.655767182 +0000 UTC m=+131.644332268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.172423 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5bj8"] Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.238545 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.258640 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:37 crc kubenswrapper[5108]: E0104 00:12:37.258854 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:37.758814364 +0000 UTC m=+131.747379450 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.259322 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:37 crc kubenswrapper[5108]: E0104 00:12:37.259997 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:37.759971555 +0000 UTC m=+131.748536641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.361317 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e709361-d053-4f53-b853-aede95948b7b-kube-api-access\") pod \"5e709361-d053-4f53-b853-aede95948b7b\" (UID: \"5e709361-d053-4f53-b853-aede95948b7b\") " Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.362101 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:37 crc kubenswrapper[5108]: E0104 00:12:37.362281 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:37.862251786 +0000 UTC m=+131.850816882 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.362663 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e709361-d053-4f53-b853-aede95948b7b-kubelet-dir\") pod \"5e709361-d053-4f53-b853-aede95948b7b\" (UID: \"5e709361-d053-4f53-b853-aede95948b7b\") " Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.362756 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e709361-d053-4f53-b853-aede95948b7b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5e709361-d053-4f53-b853-aede95948b7b" (UID: "5e709361-d053-4f53-b853-aede95948b7b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.363118 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:37 crc kubenswrapper[5108]: E0104 00:12:37.363846 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-04 00:12:37.863833919 +0000 UTC m=+131.852399015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.366988 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e709361-d053-4f53-b853-aede95948b7b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.376238 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e709361-d053-4f53-b853-aede95948b7b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5e709361-d053-4f53-b853-aede95948b7b" (UID: "5e709361-d053-4f53-b853-aede95948b7b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.468167 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:37 crc kubenswrapper[5108]: E0104 00:12:37.468448 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-04 00:12:37.968399042 +0000 UTC m=+131.956964128 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.469238 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.469358 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e709361-d053-4f53-b853-aede95948b7b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 04 00:12:37 crc kubenswrapper[5108]: E0104 00:12:37.470571 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:37.97055541 +0000 UTC m=+131.959120496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.570567 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:37 crc kubenswrapper[5108]: E0104 00:12:37.570895 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:38.070833547 +0000 UTC m=+132.059398623 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.571720 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:37 crc kubenswrapper[5108]: E0104 00:12:37.572500 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:38.072479651 +0000 UTC m=+132.061044727 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.673821 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:37 crc kubenswrapper[5108]: E0104 00:12:37.674232 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:38.174186157 +0000 UTC m=+132.162751243 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.684085 5108 generic.go:358] "Generic (PLEG): container finished" podID="2a0c6ba9-a7b4-42c9-8121-790c1d9cb024" containerID="02f9428cfbceb44b1090900540c7f7935bafef78343d58da19cfa46d929845b6" exitCode=0 Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.684186 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" event={"ID":"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024","Type":"ContainerDied","Data":"02f9428cfbceb44b1090900540c7f7935bafef78343d58da19cfa46d929845b6"} Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.686959 5108 generic.go:358] "Generic (PLEG): container finished" podID="d28a78c9-d785-4300-bbfe-580917daaeb7" containerID="d09590af24f0083c4ffcfd8bbf55561836823b57f2d135fe7982b8a97fab80bf" exitCode=0 Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.687126 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpxsz" event={"ID":"d28a78c9-d785-4300-bbfe-580917daaeb7","Type":"ContainerDied","Data":"d09590af24f0083c4ffcfd8bbf55561836823b57f2d135fe7982b8a97fab80bf"} Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.687162 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpxsz" event={"ID":"d28a78c9-d785-4300-bbfe-580917daaeb7","Type":"ContainerStarted","Data":"40397994e2beed64d7866dab282b8180765f569e76fefd07df1c5ea460d229cb"} Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 
00:12:37.692176 5108 generic.go:358] "Generic (PLEG): container finished" podID="bdc5ebfd-e3f3-4e8c-a845-91f1644e738b" containerID="5f95447117aeeab24fb218fce592fd66dfc3b716535923c5ab957c9fa7f1b5db" exitCode=0 Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.692392 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n9gg" event={"ID":"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b","Type":"ContainerDied","Data":"5f95447117aeeab24fb218fce592fd66dfc3b716535923c5ab957c9fa7f1b5db"} Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.695587 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"5e709361-d053-4f53-b853-aede95948b7b","Type":"ContainerDied","Data":"27df4442130cad9fc62f2475f7d17b71a4cd4a2092bc6ea59d1c76a1f1150d98"} Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.695629 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27df4442130cad9fc62f2475f7d17b71a4cd4a2092bc6ea59d1c76a1f1150d98" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.695758 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.696599 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 04 00:12:37 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Jan 04 00:12:37 crc kubenswrapper[5108]: [+]process-running ok Jan 04 00:12:37 crc kubenswrapper[5108]: healthz check failed Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.696699 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.699716 5108 generic.go:358] "Generic (PLEG): container finished" podID="59b92be9-237e-4252-9bbe-a71908afb6e9" containerID="5ba6dc9847b2151ddb21ea830f1493f817217bf71ad71bc20c223d01fdb83e06" exitCode=0 Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.699821 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28926" event={"ID":"59b92be9-237e-4252-9bbe-a71908afb6e9","Type":"ContainerDied","Data":"5ba6dc9847b2151ddb21ea830f1493f817217bf71ad71bc20c223d01fdb83e06"} Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.704777 5108 generic.go:358] "Generic (PLEG): container finished" podID="49f3cf98-e60e-4844-b59e-14d18c3d9559" containerID="4818df206aa110065db6199226259b0e3988e9e707b36c790bd3753bd2bc4696" exitCode=0 Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.704950 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5bj8" 
event={"ID":"49f3cf98-e60e-4844-b59e-14d18c3d9559","Type":"ContainerDied","Data":"4818df206aa110065db6199226259b0e3988e9e707b36c790bd3753bd2bc4696"} Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.704985 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5bj8" event={"ID":"49f3cf98-e60e-4844-b59e-14d18c3d9559","Type":"ContainerStarted","Data":"be2f9529fbf894f542845855bfe1e25aa7a53fb2f868e6506c7fa637d2b61d82"} Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.707517 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clk26" event={"ID":"1aa34c52-ea52-42e1-a7b1-a6f22e32642b","Type":"ContainerStarted","Data":"4b5f861601d1bb512fd16d58e941f6fd63c1a48559fe02019b46cea0f2bed4a6"} Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.735068 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.736572 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5e709361-d053-4f53-b853-aede95948b7b" containerName="pruner" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.736596 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e709361-d053-4f53-b853-aede95948b7b" containerName="pruner" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.736715 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="5e709361-d053-4f53-b853-aede95948b7b" containerName="pruner" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.751758 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.752009 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.760102 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.770889 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.778745 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:37 crc kubenswrapper[5108]: E0104 00:12:37.779466 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:38.279448854 +0000 UTC m=+132.268013940 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.880652 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:37 crc kubenswrapper[5108]: E0104 00:12:37.880874 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:38.38083179 +0000 UTC m=+132.369396876 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.882687 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3561a689-d524-495e-bd7f-81241339cfef-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"3561a689-d524-495e-bd7f-81241339cfef\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.882756 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.882822 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3561a689-d524-495e-bd7f-81241339cfef-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"3561a689-d524-495e-bd7f-81241339cfef\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 04 00:12:37 crc kubenswrapper[5108]: E0104 00:12:37.883211 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-04 00:12:38.383178224 +0000 UTC m=+132.371743310 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.992391 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.996922 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:37 crc kubenswrapper[5108]: I0104 00:12:37.998588 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-h5ft9" Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.001633 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3561a689-d524-495e-bd7f-81241339cfef-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"3561a689-d524-495e-bd7f-81241339cfef\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.001783 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3561a689-d524-495e-bd7f-81241339cfef-kube-api-access\") pod 
\"revision-pruner-11-crc\" (UID: \"3561a689-d524-495e-bd7f-81241339cfef\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.002331 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3561a689-d524-495e-bd7f-81241339cfef-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"3561a689-d524-495e-bd7f-81241339cfef\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 04 00:12:38 crc kubenswrapper[5108]: E0104 00:12:38.002626 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:38.502592796 +0000 UTC m=+132.491157892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.029124 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3561a689-d524-495e-bd7f-81241339cfef-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"3561a689-d524-495e-bd7f-81241339cfef\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.095419 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.103672 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:38 crc kubenswrapper[5108]: E0104 00:12:38.104552 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:38.604529178 +0000 UTC m=+132.593094444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.207679 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:38 crc kubenswrapper[5108]: E0104 00:12:38.208049 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:38.708019531 +0000 UTC m=+132.696584617 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.310047 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:38 crc kubenswrapper[5108]: E0104 00:12:38.310623 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:38.81060157 +0000 UTC m=+132.799166656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.412753 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:38 crc kubenswrapper[5108]: E0104 00:12:38.413148 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:38.913118036 +0000 UTC m=+132.901683122 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.516624 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:38 crc kubenswrapper[5108]: E0104 00:12:38.517059 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:39.017028462 +0000 UTC m=+133.005593548 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.588144 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.619273 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:38 crc kubenswrapper[5108]: E0104 00:12:38.620017 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:39.11998496 +0000 UTC m=+133.108550046 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.706505 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 04 00:12:38 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld Jan 04 00:12:38 crc kubenswrapper[5108]: [+]process-running ok Jan 04 00:12:38 crc kubenswrapper[5108]: healthz check failed Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.707218 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.724417 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:38 crc kubenswrapper[5108]: E0104 00:12:38.724910 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-04 00:12:39.224892632 +0000 UTC m=+133.213457708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.757985 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.758066 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.761020 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-wl97g" Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.768482 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-5mch2" Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.783753 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.820821 5108 generic.go:358] "Generic (PLEG): container 
finished" podID="1aa34c52-ea52-42e1-a7b1-a6f22e32642b" containerID="53e38fcca4e479daf86ba5adabe7a76543efdb7feffc17bde30b53d3dcd9c0f1" exitCode=0 Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.821486 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clk26" event={"ID":"1aa34c52-ea52-42e1-a7b1-a6f22e32642b","Type":"ContainerDied","Data":"53e38fcca4e479daf86ba5adabe7a76543efdb7feffc17bde30b53d3dcd9c0f1"} Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.827551 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:38 crc kubenswrapper[5108]: E0104 00:12:38.829126 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:39.329101664 +0000 UTC m=+133.317666750 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.831538 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"3561a689-d524-495e-bd7f-81241339cfef","Type":"ContainerStarted","Data":"043dd1b0565b8bb42650de00de460c1036ff6cb2501f029407cc703c8ca42787"} Jan 04 00:12:38 crc kubenswrapper[5108]: I0104 00:12:38.933481 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:38 crc kubenswrapper[5108]: E0104 00:12:38.939112 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:39.439086783 +0000 UTC m=+133.427652049 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.043251 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:39 crc kubenswrapper[5108]: E0104 00:12:39.043860 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:39.5438358 +0000 UTC m=+133.532400886 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.145531 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:39 crc kubenswrapper[5108]: E0104 00:12:39.146363 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:39.646335407 +0000 UTC m=+133.634900503 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.246568 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:39 crc kubenswrapper[5108]: E0104 00:12:39.246785 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:39.746727727 +0000 UTC m=+133.735292823 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.248269 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:39 crc kubenswrapper[5108]: E0104 00:12:39.248886 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:39.748866134 +0000 UTC m=+133.737431230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.351615 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:39 crc kubenswrapper[5108]: E0104 00:12:39.352676 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:39.852647435 +0000 UTC m=+133.841212521 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.455394 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:39 crc kubenswrapper[5108]: E0104 00:12:39.455982 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:39.955950303 +0000 UTC m=+133.944515569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.513422 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.556443 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-config-volume\") pod \"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024\" (UID: \"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024\") " Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.556513 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hscww\" (UniqueName: \"kubernetes.io/projected/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-kube-api-access-hscww\") pod \"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024\" (UID: \"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024\") " Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.556709 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:39 crc kubenswrapper[5108]: E0104 00:12:39.556873 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:40.056844807 +0000 UTC m=+134.045409893 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.556971 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-secret-volume\") pod \"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024\" (UID: \"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024\") " Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.557703 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-config-volume" (OuterVolumeSpecName: "config-volume") pod "2a0c6ba9-a7b4-42c9-8121-790c1d9cb024" (UID: "2a0c6ba9-a7b4-42c9-8121-790c1d9cb024"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.559845 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.560031 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-config-volume\") on node \"crc\" DevicePath \"\"" Jan 04 00:12:39 crc kubenswrapper[5108]: E0104 00:12:39.560541 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:40.060518996 +0000 UTC m=+134.049084082 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.577272 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-kube-api-access-hscww" (OuterVolumeSpecName: "kube-api-access-hscww") pod "2a0c6ba9-a7b4-42c9-8121-790c1d9cb024" (UID: "2a0c6ba9-a7b4-42c9-8121-790c1d9cb024"). InnerVolumeSpecName "kube-api-access-hscww". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.578575 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2a0c6ba9-a7b4-42c9-8121-790c1d9cb024" (UID: "2a0c6ba9-a7b4-42c9-8121-790c1d9cb024"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.663467 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:39 crc kubenswrapper[5108]: E0104 00:12:39.664613 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:40.164575135 +0000 UTC m=+134.153140211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.664730 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.664874 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hscww\" (UniqueName: \"kubernetes.io/projected/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-kube-api-access-hscww\") on node \"crc\" DevicePath \"\"" Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.664886 5108 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 04 00:12:39 crc kubenswrapper[5108]: E0104 00:12:39.665344 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:40.165332895 +0000 UTC m=+134.153897981 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.698403 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 04 00:12:39 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Jan 04 00:12:39 crc kubenswrapper[5108]: [+]process-running ok
Jan 04 00:12:39 crc kubenswrapper[5108]: healthz check failed
Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.698558 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.701400 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.701474 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.766649 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:39 crc kubenswrapper[5108]: E0104 00:12:39.767271 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:40.267244705 +0000 UTC m=+134.255809791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.796220 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8vtr8"
Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.823337 5108 patch_prober.go:28] interesting pod/console-64d44f6ddf-shks7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.823402 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-shks7" podUID="149cc7c1-09e7-4088-8c9c-b42e4ea2b604" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused"
Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.877449 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:39 crc kubenswrapper[5108]: E0104 00:12:39.879693 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:40.379674311 +0000 UTC m=+134.368239397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.897312 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k" event={"ID":"2a0c6ba9-a7b4-42c9-8121-790c1d9cb024","Type":"ContainerDied","Data":"6a635abb69e6ddbc0c89d227f0c47ebe327d77106eea5c4de78b028852fe6037"}
Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.897388 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a635abb69e6ddbc0c89d227f0c47ebe327d77106eea5c4de78b028852fe6037"
Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.897568 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k"
Jan 04 00:12:39 crc kubenswrapper[5108]: I0104 00:12:39.980407 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:39 crc kubenswrapper[5108]: E0104 00:12:39.980784 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:40.480755619 +0000 UTC m=+134.469320705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.081680 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:40 crc kubenswrapper[5108]: E0104 00:12:40.082164 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:40.582146185 +0000 UTC m=+134.570711281 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.183063 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:40 crc kubenswrapper[5108]: E0104 00:12:40.183259 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:40.683229954 +0000 UTC m=+134.671795040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.183773 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:40 crc kubenswrapper[5108]: E0104 00:12:40.184267 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:40.684248791 +0000 UTC m=+134.672813877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.306365 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:40 crc kubenswrapper[5108]: E0104 00:12:40.306612 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:40.806567633 +0000 UTC m=+134.795132719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:40 crc kubenswrapper[5108]: E0104 00:12:40.307833 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:40.807811727 +0000 UTC m=+134.796376813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.308058 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.392846 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w"
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.409627 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:40 crc kubenswrapper[5108]: E0104 00:12:40.409929 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:40.909881981 +0000 UTC m=+134.898447077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.410843 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:40 crc kubenswrapper[5108]: E0104 00:12:40.411372 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:40.91135838 +0000 UTC m=+134.899923476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.512852 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:40 crc kubenswrapper[5108]: E0104 00:12:40.514610 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:41.014588687 +0000 UTC m=+135.003153773 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.615475 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:40 crc kubenswrapper[5108]: E0104 00:12:40.615529 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:41.115505591 +0000 UTC m=+135.104070677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.737508 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 04 00:12:40 crc kubenswrapper[5108]: [-]has-synced failed: reason withheld
Jan 04 00:12:40 crc kubenswrapper[5108]: [+]process-running ok
Jan 04 00:12:40 crc kubenswrapper[5108]: healthz check failed
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.737616 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.738803 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:40 crc kubenswrapper[5108]: E0104 00:12:40.739234 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:41.239189819 +0000 UTC m=+135.227754905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.840749 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:40 crc kubenswrapper[5108]: E0104 00:12:40.841325 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:41.341297255 +0000 UTC m=+135.329862341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.917722 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" event={"ID":"841a53bb-0876-4f9d-b4bf-b01da8e9307b","Type":"ContainerStarted","Data":"c97cadacee1d7c39a179de32ee97941322db6e7e335cc3613305cc5790534a18"}
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.920597 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"3561a689-d524-495e-bd7f-81241339cfef","Type":"ContainerStarted","Data":"8c6df64ad864f76aa7f05ebf64476223a9ce6eef79582c4e6d1eb6c1e40d9b6f"}
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.942585 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=3.942561418 podStartE2EDuration="3.942561418s" podCreationTimestamp="2026-01-04 00:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:40.939595398 +0000 UTC m=+134.928160484" watchObservedRunningTime="2026-01-04 00:12:40.942561418 +0000 UTC m=+134.931126514"
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.942904 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:40 crc kubenswrapper[5108]: E0104 00:12:40.943335 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:41.443295438 +0000 UTC m=+135.431860524 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:40 crc kubenswrapper[5108]: I0104 00:12:40.959215 5108 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.047468 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:41 crc kubenswrapper[5108]: E0104 00:12:41.047993 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:41.547966343 +0000 UTC m=+135.536531589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.148854 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:41 crc kubenswrapper[5108]: E0104 00:12:41.149013 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:41.648983429 +0000 UTC m=+135.637548515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.149381 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:41 crc kubenswrapper[5108]: E0104 00:12:41.150161 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:41.650149571 +0000 UTC m=+135.638714657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.250105 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:41 crc kubenswrapper[5108]: E0104 00:12:41.250676 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:41.750635413 +0000 UTC m=+135.739200499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.251094 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:41 crc kubenswrapper[5108]: E0104 00:12:41.251722 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:41.751700662 +0000 UTC m=+135.740265748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.353799 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:41 crc kubenswrapper[5108]: E0104 00:12:41.354724 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:41.854699072 +0000 UTC m=+135.843264158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.457872 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:41 crc kubenswrapper[5108]: E0104 00:12:41.458423 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:41.95839951 +0000 UTC m=+135.946964596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.560132 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:41 crc kubenswrapper[5108]: E0104 00:12:41.560265 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:42.060240669 +0000 UTC m=+136.048805755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.560625 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:41 crc kubenswrapper[5108]: E0104 00:12:41.561077 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:42.061064022 +0000 UTC m=+136.049629108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.675538 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 04 00:12:41 crc kubenswrapper[5108]: E0104 00:12:41.676066 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-04 00:12:42.176003914 +0000 UTC m=+136.164569000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.689031 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 04 00:12:41 crc kubenswrapper[5108]: [+]has-synced ok
Jan 04 00:12:41 crc kubenswrapper[5108]: [+]process-running ok
Jan 04 00:12:41 crc kubenswrapper[5108]: healthz check failed
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.689126 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 04 00:12:41 crc kubenswrapper[5108]: E0104 00:12:41.779496 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-04 00:12:42.279463177 +0000 UTC m=+136.268028263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nbqsh" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.778986 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.808101 5108 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-04T00:12:40.959266889Z","UUID":"d52b9943-cc61-4709-8dce-6542647c0bef","Handler":null,"Name":"","Endpoint":""}
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.838674 5108 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.838753 5108 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.884602 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName:
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.890186 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.951093 5108 generic.go:358] "Generic (PLEG): container finished" podID="3561a689-d524-495e-bd7f-81241339cfef" containerID="8c6df64ad864f76aa7f05ebf64476223a9ce6eef79582c4e6d1eb6c1e40d9b6f" exitCode=0 Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.951289 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"3561a689-d524-495e-bd7f-81241339cfef","Type":"ContainerDied","Data":"8c6df64ad864f76aa7f05ebf64476223a9ce6eef79582c4e6d1eb6c1e40d9b6f"} Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.955030 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" event={"ID":"841a53bb-0876-4f9d-b4bf-b01da8e9307b","Type":"ContainerStarted","Data":"fa8791025c55f0e4094efad7e168bccedcb0d361422a2a7e16971523eadd308e"} Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.987057 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " 
pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.992914 5108 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 04 00:12:41 crc kubenswrapper[5108]: I0104 00:12:41.992993 5108 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:42 crc kubenswrapper[5108]: I0104 00:12:42.068475 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nbqsh\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:42 crc kubenswrapper[5108]: I0104 00:12:42.088132 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 04 00:12:42 crc kubenswrapper[5108]: I0104 00:12:42.091651 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:42 crc kubenswrapper[5108]: I0104 00:12:42.485417 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 04 00:12:42 crc kubenswrapper[5108]: I0104 00:12:42.748918 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:42 crc kubenswrapper[5108]: I0104 00:12:42.758256 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" Jan 04 00:12:42 crc kubenswrapper[5108]: I0104 00:12:42.971973 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nbqsh"] Jan 04 00:12:42 crc kubenswrapper[5108]: I0104 00:12:42.980521 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" event={"ID":"841a53bb-0876-4f9d-b4bf-b01da8e9307b","Type":"ContainerStarted","Data":"22d7eb9eed5e6db992289c4bfd676ce7befecfa8f8c07db2ec5e8a48eda34069"} Jan 04 00:12:43 crc kubenswrapper[5108]: I0104 00:12:43.007170 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5jjj4" podStartSLOduration=27.007143151 podStartE2EDuration="27.007143151s" podCreationTimestamp="2026-01-04 00:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:43.00335515 +0000 UTC m=+136.991920266" watchObservedRunningTime="2026-01-04 00:12:43.007143151 +0000 UTC m=+136.995708237" Jan 04 00:12:44 crc kubenswrapper[5108]: I0104 00:12:44.004782 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" 
event={"ID":"7c39a999-644f-43cd-b7e6-c7fd14281924","Type":"ContainerStarted","Data":"9abcd8ed4c62866af4e464ee1f9e0ba733b5018b9858170b273192eeea514bfd"} Jan 04 00:12:44 crc kubenswrapper[5108]: I0104 00:12:44.281240 5108 ???:1] "http: TLS handshake error from 192.168.126.11:56888: no serving certificate available for the kubelet" Jan 04 00:12:45 crc kubenswrapper[5108]: E0104 00:12:45.149292 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 04 00:12:45 crc kubenswrapper[5108]: E0104 00:12:45.160739 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 04 00:12:45 crc kubenswrapper[5108]: E0104 00:12:45.188564 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 04 00:12:45 crc kubenswrapper[5108]: E0104 00:12:45.188694 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" podUID="14a3d6fe-b87f-473d-b105-d2cf34343253" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 04 00:12:48 crc kubenswrapper[5108]: I0104 00:12:48.349025 5108 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 04 00:12:48 crc kubenswrapper[5108]: I0104 00:12:48.543648 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3561a689-d524-495e-bd7f-81241339cfef-kube-api-access\") pod \"3561a689-d524-495e-bd7f-81241339cfef\" (UID: \"3561a689-d524-495e-bd7f-81241339cfef\") " Jan 04 00:12:48 crc kubenswrapper[5108]: I0104 00:12:48.545274 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3561a689-d524-495e-bd7f-81241339cfef-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3561a689-d524-495e-bd7f-81241339cfef" (UID: "3561a689-d524-495e-bd7f-81241339cfef"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:12:48 crc kubenswrapper[5108]: I0104 00:12:48.545327 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3561a689-d524-495e-bd7f-81241339cfef-kubelet-dir\") pod \"3561a689-d524-495e-bd7f-81241339cfef\" (UID: \"3561a689-d524-495e-bd7f-81241339cfef\") " Jan 04 00:12:48 crc kubenswrapper[5108]: I0104 00:12:48.545621 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3561a689-d524-495e-bd7f-81241339cfef-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 04 00:12:48 crc kubenswrapper[5108]: I0104 00:12:48.676304 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3561a689-d524-495e-bd7f-81241339cfef-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3561a689-d524-495e-bd7f-81241339cfef" (UID: "3561a689-d524-495e-bd7f-81241339cfef"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:12:48 crc kubenswrapper[5108]: I0104 00:12:48.794598 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 04 00:12:48 crc kubenswrapper[5108]: I0104 00:12:48.794695 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 04 00:12:48 crc kubenswrapper[5108]: I0104 00:12:48.795299 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3561a689-d524-495e-bd7f-81241339cfef-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 04 00:12:49 crc kubenswrapper[5108]: I0104 00:12:49.120520 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" event={"ID":"7c39a999-644f-43cd-b7e6-c7fd14281924","Type":"ContainerStarted","Data":"27fc4746e1db1913a84903d9ef912507087ecce90b8ad55ea0c4ddb5efbfe999"} Jan 04 00:12:49 crc kubenswrapper[5108]: I0104 00:12:49.121790 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:12:49 crc kubenswrapper[5108]: I0104 00:12:49.128383 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 04 00:12:49 crc kubenswrapper[5108]: I0104 00:12:49.128435 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"3561a689-d524-495e-bd7f-81241339cfef","Type":"ContainerDied","Data":"043dd1b0565b8bb42650de00de460c1036ff6cb2501f029407cc703c8ca42787"} Jan 04 00:12:49 crc kubenswrapper[5108]: I0104 00:12:49.128512 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="043dd1b0565b8bb42650de00de460c1036ff6cb2501f029407cc703c8ca42787" Jan 04 00:12:49 crc kubenswrapper[5108]: I0104 00:12:49.154274 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" podStartSLOduration=121.154236503 podStartE2EDuration="2m1.154236503s" podCreationTimestamp="2026-01-04 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:12:49.147614404 +0000 UTC m=+143.136179510" watchObservedRunningTime="2026-01-04 00:12:49.154236503 +0000 UTC m=+143.142801589" Jan 04 00:12:49 crc kubenswrapper[5108]: I0104 00:12:49.699021 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 04 00:12:49 crc kubenswrapper[5108]: I0104 00:12:49.699598 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 04 00:12:49 crc kubenswrapper[5108]: I0104 00:12:49.699678 5108 kubelet.go:2658] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-glcdh" Jan 04 00:12:49 crc kubenswrapper[5108]: I0104 00:12:49.700946 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 04 00:12:49 crc kubenswrapper[5108]: I0104 00:12:49.701048 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 04 00:12:49 crc kubenswrapper[5108]: I0104 00:12:49.702428 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"191206243a1a0f63cd7205d359d923c37cfe1f1de594f5553e0fe986f027155a"} pod="openshift-console/downloads-747b44746d-glcdh" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 04 00:12:49 crc kubenswrapper[5108]: I0104 00:12:49.702514 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" containerID="cri-o://191206243a1a0f63cd7205d359d923c37cfe1f1de594f5553e0fe986f027155a" gracePeriod=2 Jan 04 00:12:49 crc kubenswrapper[5108]: I0104 00:12:49.860868 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:49 crc kubenswrapper[5108]: I0104 00:12:49.868904 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-shks7" Jan 04 00:12:50 crc kubenswrapper[5108]: I0104 
00:12:50.137625 5108 generic.go:358] "Generic (PLEG): container finished" podID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerID="191206243a1a0f63cd7205d359d923c37cfe1f1de594f5553e0fe986f027155a" exitCode=0 Jan 04 00:12:50 crc kubenswrapper[5108]: I0104 00:12:50.137907 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-glcdh" event={"ID":"68f75634-8fb1-40a4-801d-6355d62d81f8","Type":"ContainerDied","Data":"191206243a1a0f63cd7205d359d923c37cfe1f1de594f5553e0fe986f027155a"} Jan 04 00:12:55 crc kubenswrapper[5108]: E0104 00:12:55.149465 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 04 00:12:55 crc kubenswrapper[5108]: E0104 00:12:55.153921 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 04 00:12:55 crc kubenswrapper[5108]: E0104 00:12:55.155781 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 04 00:12:55 crc kubenswrapper[5108]: E0104 00:12:55.155921 5108 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" 
podUID="14a3d6fe-b87f-473d-b105-d2cf34343253" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 04 00:12:59 crc kubenswrapper[5108]: I0104 00:12:59.702073 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 04 00:12:59 crc kubenswrapper[5108]: I0104 00:12:59.702826 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 04 00:13:01 crc kubenswrapper[5108]: I0104 00:13:01.216732 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-hvq52_14a3d6fe-b87f-473d-b105-d2cf34343253/kube-multus-additional-cni-plugins/0.log" Jan 04 00:13:01 crc kubenswrapper[5108]: I0104 00:13:01.217268 5108 generic.go:358] "Generic (PLEG): container finished" podID="14a3d6fe-b87f-473d-b105-d2cf34343253" containerID="fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e" exitCode=137 Jan 04 00:13:01 crc kubenswrapper[5108]: I0104 00:13:01.217373 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" event={"ID":"14a3d6fe-b87f-473d-b105-d2cf34343253","Type":"ContainerDied","Data":"fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e"} Jan 04 00:13:02 crc kubenswrapper[5108]: I0104 00:13:02.091160 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-nk4f2" Jan 04 00:13:04 crc kubenswrapper[5108]: I0104 00:13:04.792314 5108 ???:1] "http: TLS handshake error from 192.168.126.11:34076: 
no serving certificate available for the kubelet" Jan 04 00:13:05 crc kubenswrapper[5108]: E0104 00:13:05.134814 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e is running failed: container process not found" containerID="fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 04 00:13:05 crc kubenswrapper[5108]: E0104 00:13:05.135413 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e is running failed: container process not found" containerID="fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 04 00:13:05 crc kubenswrapper[5108]: E0104 00:13:05.135722 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e is running failed: container process not found" containerID="fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 04 00:13:05 crc kubenswrapper[5108]: E0104 00:13:05.135761 5108 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" podUID="14a3d6fe-b87f-473d-b105-d2cf34343253" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 04 00:13:08 crc kubenswrapper[5108]: I0104 00:13:08.852763 5108 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 04 00:13:09 crc kubenswrapper[5108]: I0104 00:13:09.703503 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 04 00:13:09 crc kubenswrapper[5108]: I0104 00:13:09.704292 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.115309 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.116250 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3561a689-d524-495e-bd7f-81241339cfef" containerName="pruner" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.116266 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3561a689-d524-495e-bd7f-81241339cfef" containerName="pruner" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.116277 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2a0c6ba9-a7b4-42c9-8121-790c1d9cb024" containerName="collect-profiles" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.116284 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0c6ba9-a7b4-42c9-8121-790c1d9cb024" containerName="collect-profiles" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.116419 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="2a0c6ba9-a7b4-42c9-8121-790c1d9cb024" 
containerName="collect-profiles" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.116433 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3561a689-d524-495e-bd7f-81241339cfef" containerName="pruner" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.409802 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.409950 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.410466 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.414429 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.414465 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.517671 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.518805 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a\") " 
pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.621541 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.621640 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.621648 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.644655 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 04 00:13:10 crc kubenswrapper[5108]: I0104 00:13:10.740771 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.216922 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-hvq52_14a3d6fe-b87f-473d-b105-d2cf34343253/kube-multus-additional-cni-plugins/0.log" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.221339 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.337259 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-hvq52_14a3d6fe-b87f-473d-b105-d2cf34343253/kube-multus-additional-cni-plugins/0.log" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.337663 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" event={"ID":"14a3d6fe-b87f-473d-b105-d2cf34343253","Type":"ContainerDied","Data":"2102134d8e6db15d5ff404098fbd961aedc0b73f7f3b7fec97d5582cd3a49f84"} Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.337691 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hvq52" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.337705 5108 scope.go:117] "RemoveContainer" containerID="fa2162bb6d3e833287da0e2df8485f715aad6f664ae0a8481e3d7701cd19609e" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.376019 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/14a3d6fe-b87f-473d-b105-d2cf34343253-cni-sysctl-allowlist\") pod \"14a3d6fe-b87f-473d-b105-d2cf34343253\" (UID: \"14a3d6fe-b87f-473d-b105-d2cf34343253\") " Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.376139 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/14a3d6fe-b87f-473d-b105-d2cf34343253-ready\") pod \"14a3d6fe-b87f-473d-b105-d2cf34343253\" (UID: \"14a3d6fe-b87f-473d-b105-d2cf34343253\") " Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.376285 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/14a3d6fe-b87f-473d-b105-d2cf34343253-tuning-conf-dir\") pod \"14a3d6fe-b87f-473d-b105-d2cf34343253\" (UID: \"14a3d6fe-b87f-473d-b105-d2cf34343253\") " Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.376320 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj42n\" (UniqueName: \"kubernetes.io/projected/14a3d6fe-b87f-473d-b105-d2cf34343253-kube-api-access-jj42n\") pod \"14a3d6fe-b87f-473d-b105-d2cf34343253\" (UID: \"14a3d6fe-b87f-473d-b105-d2cf34343253\") " Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.376881 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14a3d6fe-b87f-473d-b105-d2cf34343253-ready" (OuterVolumeSpecName: "ready") pod "14a3d6fe-b87f-473d-b105-d2cf34343253" (UID: 
"14a3d6fe-b87f-473d-b105-d2cf34343253"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.376959 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14a3d6fe-b87f-473d-b105-d2cf34343253-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "14a3d6fe-b87f-473d-b105-d2cf34343253" (UID: "14a3d6fe-b87f-473d-b105-d2cf34343253"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.377053 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14a3d6fe-b87f-473d-b105-d2cf34343253-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "14a3d6fe-b87f-473d-b105-d2cf34343253" (UID: "14a3d6fe-b87f-473d-b105-d2cf34343253"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.386402 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a3d6fe-b87f-473d-b105-d2cf34343253-kube-api-access-jj42n" (OuterVolumeSpecName: "kube-api-access-jj42n") pod "14a3d6fe-b87f-473d-b105-d2cf34343253" (UID: "14a3d6fe-b87f-473d-b105-d2cf34343253"). InnerVolumeSpecName "kube-api-access-jj42n". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.478296 5108 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/14a3d6fe-b87f-473d-b105-d2cf34343253-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.478332 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jj42n\" (UniqueName: \"kubernetes.io/projected/14a3d6fe-b87f-473d-b105-d2cf34343253-kube-api-access-jj42n\") on node \"crc\" DevicePath \"\"" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.478345 5108 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/14a3d6fe-b87f-473d-b105-d2cf34343253-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.478353 5108 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/14a3d6fe-b87f-473d-b105-d2cf34343253-ready\") on node \"crc\" DevicePath \"\"" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.497166 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.497899 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="14a3d6fe-b87f-473d-b105-d2cf34343253" containerName="kube-multus-additional-cni-plugins" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.497919 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a3d6fe-b87f-473d-b105-d2cf34343253" containerName="kube-multus-additional-cni-plugins" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.498064 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="14a3d6fe-b87f-473d-b105-d2cf34343253" containerName="kube-multus-additional-cni-plugins" Jan 04 00:13:14 crc 
kubenswrapper[5108]: I0104 00:13:14.513110 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.522774 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.662419 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.683328 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-kube-api-access\") pod \"installer-12-crc\" (UID: \"c3c98488-aab3-45f2-8ada-d1dfcb4751a8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.685120 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-var-lock\") pod \"installer-12-crc\" (UID: \"c3c98488-aab3-45f2-8ada-d1dfcb4751a8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.685324 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-kubelet-dir\") pod \"installer-12-crc\" (UID: \"c3c98488-aab3-45f2-8ada-d1dfcb4751a8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.696411 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hvq52"] Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.700362 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" 
pods=["openshift-multus/cni-sysctl-allowlist-ds-hvq52"] Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.787397 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-var-lock\") pod \"installer-12-crc\" (UID: \"c3c98488-aab3-45f2-8ada-d1dfcb4751a8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.787459 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-kubelet-dir\") pod \"installer-12-crc\" (UID: \"c3c98488-aab3-45f2-8ada-d1dfcb4751a8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.787531 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-kube-api-access\") pod \"installer-12-crc\" (UID: \"c3c98488-aab3-45f2-8ada-d1dfcb4751a8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.787606 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-var-lock\") pod \"installer-12-crc\" (UID: \"c3c98488-aab3-45f2-8ada-d1dfcb4751a8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.787705 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-kubelet-dir\") pod \"installer-12-crc\" (UID: \"c3c98488-aab3-45f2-8ada-d1dfcb4751a8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.819561 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-kube-api-access\") pod \"installer-12-crc\" (UID: \"c3c98488-aab3-45f2-8ada-d1dfcb4751a8\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 04 00:13:14 crc kubenswrapper[5108]: I0104 00:13:14.865603 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 04 00:13:15 crc kubenswrapper[5108]: I0104 00:13:15.362187 5108 generic.go:358] "Generic (PLEG): container finished" podID="d28a78c9-d785-4300-bbfe-580917daaeb7" containerID="6718602829d7187e179e3d9a5a97cb615d69b68332d5b22facd1a7ce05049c18" exitCode=0 Jan 04 00:13:15 crc kubenswrapper[5108]: I0104 00:13:15.362643 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpxsz" event={"ID":"d28a78c9-d785-4300-bbfe-580917daaeb7","Type":"ContainerDied","Data":"6718602829d7187e179e3d9a5a97cb615d69b68332d5b22facd1a7ce05049c18"} Jan 04 00:13:15 crc kubenswrapper[5108]: I0104 00:13:15.369420 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n9gg" event={"ID":"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b","Type":"ContainerStarted","Data":"089d8cb77a5efda75d0ec33ded9ae530ccb5a434e6228a399617a056b62f4206"} Jan 04 00:13:15 crc kubenswrapper[5108]: I0104 00:13:15.374676 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a","Type":"ContainerStarted","Data":"0541a7e555eb3ca36588d1c9255e91fd05ac74a2e07ae5fdd0b4d47b4e22299c"} Jan 04 00:13:15 crc kubenswrapper[5108]: I0104 00:13:15.378968 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs7zk" 
event={"ID":"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9","Type":"ContainerStarted","Data":"5e53147638ab0f0c4d4c046c92f2bc0f379082fe14fdb7fafe8eaef332d05f69"} Jan 04 00:13:15 crc kubenswrapper[5108]: I0104 00:13:15.394223 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff989" event={"ID":"320a6eb9-3704-43c9-84b9-25580545ff50","Type":"ContainerStarted","Data":"88e7f0f780f8d738e221d255104188c91c0c16e6b4911749f1beff44e3ef308f"} Jan 04 00:13:15 crc kubenswrapper[5108]: I0104 00:13:15.398578 5108 generic.go:358] "Generic (PLEG): container finished" podID="59b92be9-237e-4252-9bbe-a71908afb6e9" containerID="fd0246f8c2b5444e71df9baf14add9a0cc95e817dcdd6f0c8dc48ba6ff041866" exitCode=0 Jan 04 00:13:15 crc kubenswrapper[5108]: I0104 00:13:15.398648 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28926" event={"ID":"59b92be9-237e-4252-9bbe-a71908afb6e9","Type":"ContainerDied","Data":"fd0246f8c2b5444e71df9baf14add9a0cc95e817dcdd6f0c8dc48ba6ff041866"} Jan 04 00:13:15 crc kubenswrapper[5108]: I0104 00:13:15.415716 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5bj8" event={"ID":"49f3cf98-e60e-4844-b59e-14d18c3d9559","Type":"ContainerStarted","Data":"e807679b4d2691a71885a9fc77071fa0f4ac35eb7224d95c8009b6cb6ba33821"} Jan 04 00:13:15 crc kubenswrapper[5108]: I0104 00:13:15.440062 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-glcdh" event={"ID":"68f75634-8fb1-40a4-801d-6355d62d81f8","Type":"ContainerStarted","Data":"8f567aa30b85b9f080e5f24b948a27cd214c1033cd6f3a5cd7b4e93e93c3e56b"} Jan 04 00:13:15 crc kubenswrapper[5108]: I0104 00:13:15.464379 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9px8h" 
event={"ID":"a762f8cf-a77d-477e-8141-1bb1e02d8744","Type":"ContainerStarted","Data":"baee8ea5e4bf3524f6dc574001d38454d061cda8e6f6c1b44ad4e76fd7314bf9"} Jan 04 00:13:15 crc kubenswrapper[5108]: I0104 00:13:15.508185 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clk26" event={"ID":"1aa34c52-ea52-42e1-a7b1-a6f22e32642b","Type":"ContainerStarted","Data":"a70dfe643b272d7f9dc01ec7b36f343f620526134afae5d30a766c5cf3270870"} Jan 04 00:13:15 crc kubenswrapper[5108]: I0104 00:13:15.676386 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-glcdh" Jan 04 00:13:15 crc kubenswrapper[5108]: I0104 00:13:15.679664 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 04 00:13:15 crc kubenswrapper[5108]: I0104 00:13:15.679739 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 04 00:13:16 crc kubenswrapper[5108]: I0104 00:13:16.244684 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 04 00:13:16 crc kubenswrapper[5108]: I0104 00:13:16.464344 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14a3d6fe-b87f-473d-b105-d2cf34343253" path="/var/lib/kubelet/pods/14a3d6fe-b87f-473d-b105-d2cf34343253/volumes" Jan 04 00:13:16 crc kubenswrapper[5108]: I0104 00:13:16.518710 5108 generic.go:358] "Generic (PLEG): container finished" podID="bdc5ebfd-e3f3-4e8c-a845-91f1644e738b" 
containerID="089d8cb77a5efda75d0ec33ded9ae530ccb5a434e6228a399617a056b62f4206" exitCode=0 Jan 04 00:13:16 crc kubenswrapper[5108]: I0104 00:13:16.518860 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n9gg" event={"ID":"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b","Type":"ContainerDied","Data":"089d8cb77a5efda75d0ec33ded9ae530ccb5a434e6228a399617a056b62f4206"} Jan 04 00:13:16 crc kubenswrapper[5108]: I0104 00:13:16.522172 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a","Type":"ContainerStarted","Data":"74780b3bf9f07acbc6f6d91f55c9e7fb0a1aa657e2e96f0e755d1a48fd2ef1d1"} Jan 04 00:13:16 crc kubenswrapper[5108]: I0104 00:13:16.526420 5108 generic.go:358] "Generic (PLEG): container finished" podID="1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9" containerID="5e53147638ab0f0c4d4c046c92f2bc0f379082fe14fdb7fafe8eaef332d05f69" exitCode=0 Jan 04 00:13:16 crc kubenswrapper[5108]: I0104 00:13:16.526537 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs7zk" event={"ID":"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9","Type":"ContainerDied","Data":"5e53147638ab0f0c4d4c046c92f2bc0f379082fe14fdb7fafe8eaef332d05f69"} Jan 04 00:13:16 crc kubenswrapper[5108]: I0104 00:13:16.536503 5108 generic.go:358] "Generic (PLEG): container finished" podID="a762f8cf-a77d-477e-8141-1bb1e02d8744" containerID="baee8ea5e4bf3524f6dc574001d38454d061cda8e6f6c1b44ad4e76fd7314bf9" exitCode=0 Jan 04 00:13:16 crc kubenswrapper[5108]: I0104 00:13:16.537359 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9px8h" event={"ID":"a762f8cf-a77d-477e-8141-1bb1e02d8744","Type":"ContainerDied","Data":"baee8ea5e4bf3524f6dc574001d38454d061cda8e6f6c1b44ad4e76fd7314bf9"} Jan 04 00:13:16 crc kubenswrapper[5108]: I0104 00:13:16.543155 5108 patch_prober.go:28] interesting 
pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 04 00:13:16 crc kubenswrapper[5108]: I0104 00:13:16.543218 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 04 00:13:16 crc kubenswrapper[5108]: I0104 00:13:16.585400 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=6.585369518 podStartE2EDuration="6.585369518s" podCreationTimestamp="2026-01-04 00:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:13:16.583371634 +0000 UTC m=+170.571936720" watchObservedRunningTime="2026-01-04 00:13:16.585369518 +0000 UTC m=+170.573934604" Jan 04 00:13:17 crc kubenswrapper[5108]: I0104 00:13:17.551753 5108 generic.go:358] "Generic (PLEG): container finished" podID="ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a" containerID="74780b3bf9f07acbc6f6d91f55c9e7fb0a1aa657e2e96f0e755d1a48fd2ef1d1" exitCode=0 Jan 04 00:13:17 crc kubenswrapper[5108]: I0104 00:13:17.552049 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a","Type":"ContainerDied","Data":"74780b3bf9f07acbc6f6d91f55c9e7fb0a1aa657e2e96f0e755d1a48fd2ef1d1"} Jan 04 00:13:17 crc kubenswrapper[5108]: I0104 00:13:17.556580 5108 generic.go:358] "Generic (PLEG): container finished" podID="320a6eb9-3704-43c9-84b9-25580545ff50" containerID="88e7f0f780f8d738e221d255104188c91c0c16e6b4911749f1beff44e3ef308f" 
exitCode=0 Jan 04 00:13:17 crc kubenswrapper[5108]: I0104 00:13:17.556788 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff989" event={"ID":"320a6eb9-3704-43c9-84b9-25580545ff50","Type":"ContainerDied","Data":"88e7f0f780f8d738e221d255104188c91c0c16e6b4911749f1beff44e3ef308f"} Jan 04 00:13:17 crc kubenswrapper[5108]: I0104 00:13:17.566828 5108 generic.go:358] "Generic (PLEG): container finished" podID="1aa34c52-ea52-42e1-a7b1-a6f22e32642b" containerID="a70dfe643b272d7f9dc01ec7b36f343f620526134afae5d30a766c5cf3270870" exitCode=0 Jan 04 00:13:17 crc kubenswrapper[5108]: I0104 00:13:17.566946 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clk26" event={"ID":"1aa34c52-ea52-42e1-a7b1-a6f22e32642b","Type":"ContainerDied","Data":"a70dfe643b272d7f9dc01ec7b36f343f620526134afae5d30a766c5cf3270870"} Jan 04 00:13:17 crc kubenswrapper[5108]: I0104 00:13:17.570865 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 04 00:13:17 crc kubenswrapper[5108]: I0104 00:13:17.570963 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 04 00:13:17 crc kubenswrapper[5108]: W0104 00:13:17.655247 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc3c98488_aab3_45f2_8ada_d1dfcb4751a8.slice/crio-95ebee630e8eaf66bba665166f94ace21e2519aab27cea457863c16ca8da5b11 WatchSource:0}: Error finding container 95ebee630e8eaf66bba665166f94ace21e2519aab27cea457863c16ca8da5b11: Status 404 
returned error can't find the container with id 95ebee630e8eaf66bba665166f94ace21e2519aab27cea457863c16ca8da5b11 Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.587898 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpxsz" event={"ID":"d28a78c9-d785-4300-bbfe-580917daaeb7","Type":"ContainerStarted","Data":"af0652253cfcf907c4112a70d8311252aebd9976a1eab822bf19292256c3765d"} Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.592614 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n9gg" event={"ID":"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b","Type":"ContainerStarted","Data":"00e6021627e6774a0334a14b7ef708181bd8f1212e7e8a13ae87a139a907ad15"} Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.596335 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs7zk" event={"ID":"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9","Type":"ContainerStarted","Data":"8636ee73563ad110fbc2eac5ed930e22cea7e6321105b40333f92afcf42b2a52"} Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.598802 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff989" event={"ID":"320a6eb9-3704-43c9-84b9-25580545ff50","Type":"ContainerStarted","Data":"005d7f1259ee87a5c48eb4c0760a251d9d8ac557b66c6068c09ffdcbf0fc9e7d"} Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.601572 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28926" event={"ID":"59b92be9-237e-4252-9bbe-a71908afb6e9","Type":"ContainerStarted","Data":"1c94652f4eb48de437ab80613d6c6d88d7fc5730df4a2675ee1176295b319960"} Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.603778 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" 
event={"ID":"c3c98488-aab3-45f2-8ada-d1dfcb4751a8","Type":"ContainerStarted","Data":"5e242258e510e5bdd45a934a090b0b157ba252a3e226a996bef68eb2d513912d"} Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.603809 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"c3c98488-aab3-45f2-8ada-d1dfcb4751a8","Type":"ContainerStarted","Data":"95ebee630e8eaf66bba665166f94ace21e2519aab27cea457863c16ca8da5b11"} Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.606221 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9px8h" event={"ID":"a762f8cf-a77d-477e-8141-1bb1e02d8744","Type":"ContainerStarted","Data":"b225f03b112e0d22962553b298643dc88720ab004a92ff7255b581f99ff76315"} Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.610719 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clk26" event={"ID":"1aa34c52-ea52-42e1-a7b1-a6f22e32642b","Type":"ContainerStarted","Data":"642ee9c6d8e729c1462d0c8131f631802a34755fb66293268c620a1cd67c6176"} Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.642880 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wpxsz" podStartSLOduration=7.163050965 podStartE2EDuration="43.64285964s" podCreationTimestamp="2026-01-04 00:12:35 +0000 UTC" firstStartedPulling="2026-01-04 00:12:37.688557907 +0000 UTC m=+131.677122993" lastFinishedPulling="2026-01-04 00:13:14.168366572 +0000 UTC m=+168.156931668" observedRunningTime="2026-01-04 00:13:18.616463877 +0000 UTC m=+172.605028963" watchObservedRunningTime="2026-01-04 00:13:18.64285964 +0000 UTC m=+172.631424726" Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.690954 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ff989" podStartSLOduration=8.145882605 
podStartE2EDuration="46.690928747s" podCreationTimestamp="2026-01-04 00:12:32 +0000 UTC" firstStartedPulling="2026-01-04 00:12:35.623350451 +0000 UTC m=+129.611915537" lastFinishedPulling="2026-01-04 00:13:14.168396593 +0000 UTC m=+168.156961679" observedRunningTime="2026-01-04 00:13:18.662094639 +0000 UTC m=+172.650659745" watchObservedRunningTime="2026-01-04 00:13:18.690928747 +0000 UTC m=+172.679493833" Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.704859 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-28926" podStartSLOduration=7.23837631 podStartE2EDuration="43.704822752s" podCreationTimestamp="2026-01-04 00:12:35 +0000 UTC" firstStartedPulling="2026-01-04 00:12:37.700938054 +0000 UTC m=+131.689503140" lastFinishedPulling="2026-01-04 00:13:14.167384496 +0000 UTC m=+168.155949582" observedRunningTime="2026-01-04 00:13:18.697838374 +0000 UTC m=+172.686403460" watchObservedRunningTime="2026-01-04 00:13:18.704822752 +0000 UTC m=+172.693387838" Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.720114 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=4.720090294 podStartE2EDuration="4.720090294s" podCreationTimestamp="2026-01-04 00:13:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:13:18.718390849 +0000 UTC m=+172.706955935" watchObservedRunningTime="2026-01-04 00:13:18.720090294 +0000 UTC m=+172.708655380" Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.742436 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zs7zk" podStartSLOduration=8.186777364 podStartE2EDuration="45.742408327s" podCreationTimestamp="2026-01-04 00:12:33 +0000 UTC" firstStartedPulling="2026-01-04 00:12:36.663289954 +0000 UTC 
m=+130.651855040" lastFinishedPulling="2026-01-04 00:13:14.218920877 +0000 UTC m=+168.207486003" observedRunningTime="2026-01-04 00:13:18.734547284 +0000 UTC m=+172.723112390" watchObservedRunningTime="2026-01-04 00:13:18.742408327 +0000 UTC m=+172.730973413" Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.763009 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-clk26" podStartSLOduration=8.324535538 podStartE2EDuration="43.762977952s" podCreationTimestamp="2026-01-04 00:12:35 +0000 UTC" firstStartedPulling="2026-01-04 00:12:38.822653381 +0000 UTC m=+132.811218467" lastFinishedPulling="2026-01-04 00:13:14.261095795 +0000 UTC m=+168.249660881" observedRunningTime="2026-01-04 00:13:18.760105275 +0000 UTC m=+172.748670361" watchObservedRunningTime="2026-01-04 00:13:18.762977952 +0000 UTC m=+172.751543048" Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.786337 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5n9gg" podStartSLOduration=10.26739005 podStartE2EDuration="46.786304961s" podCreationTimestamp="2026-01-04 00:12:32 +0000 UTC" firstStartedPulling="2026-01-04 00:12:37.693535642 +0000 UTC m=+131.682100728" lastFinishedPulling="2026-01-04 00:13:14.212450533 +0000 UTC m=+168.201015639" observedRunningTime="2026-01-04 00:13:18.779609361 +0000 UTC m=+172.768174447" watchObservedRunningTime="2026-01-04 00:13:18.786304961 +0000 UTC m=+172.774870047" Jan 04 00:13:18 crc kubenswrapper[5108]: I0104 00:13:18.807989 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9px8h" podStartSLOduration=8.236274307 podStartE2EDuration="46.807964116s" podCreationTimestamp="2026-01-04 00:12:32 +0000 UTC" firstStartedPulling="2026-01-04 00:12:35.632021227 +0000 UTC m=+129.620586323" lastFinishedPulling="2026-01-04 00:13:14.203711046 +0000 UTC m=+168.192276132" 
observedRunningTime="2026-01-04 00:13:18.805362626 +0000 UTC m=+172.793927712" watchObservedRunningTime="2026-01-04 00:13:18.807964116 +0000 UTC m=+172.796529202" Jan 04 00:13:19 crc kubenswrapper[5108]: I0104 00:13:19.068187 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 04 00:13:19 crc kubenswrapper[5108]: I0104 00:13:19.248794 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a-kubelet-dir\") pod \"ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a\" (UID: \"ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a\") " Jan 04 00:13:19 crc kubenswrapper[5108]: I0104 00:13:19.248965 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a" (UID: "ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:13:19 crc kubenswrapper[5108]: I0104 00:13:19.249573 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a-kube-api-access\") pod \"ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a\" (UID: \"ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a\") " Jan 04 00:13:19 crc kubenswrapper[5108]: I0104 00:13:19.249944 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 04 00:13:19 crc kubenswrapper[5108]: I0104 00:13:19.261860 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a" (UID: "ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:13:19 crc kubenswrapper[5108]: I0104 00:13:19.350890 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 04 00:13:19 crc kubenswrapper[5108]: I0104 00:13:19.617552 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a","Type":"ContainerDied","Data":"0541a7e555eb3ca36588d1c9255e91fd05ac74a2e07ae5fdd0b4d47b4e22299c"}
Jan 04 00:13:19 crc kubenswrapper[5108]: I0104 00:13:19.619076 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0541a7e555eb3ca36588d1c9255e91fd05ac74a2e07ae5fdd0b4d47b4e22299c"
Jan 04 00:13:19 crc kubenswrapper[5108]: I0104 00:13:19.619266 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 04 00:13:19 crc kubenswrapper[5108]: I0104 00:13:19.624251 5108 generic.go:358] "Generic (PLEG): container finished" podID="49f3cf98-e60e-4844-b59e-14d18c3d9559" containerID="e807679b4d2691a71885a9fc77071fa0f4ac35eb7224d95c8009b6cb6ba33821" exitCode=0
Jan 04 00:13:19 crc kubenswrapper[5108]: I0104 00:13:19.624418 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5bj8" event={"ID":"49f3cf98-e60e-4844-b59e-14d18c3d9559","Type":"ContainerDied","Data":"e807679b4d2691a71885a9fc77071fa0f4ac35eb7224d95c8009b6cb6ba33821"}
Jan 04 00:13:19 crc kubenswrapper[5108]: I0104 00:13:19.705084 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 04 00:13:19 crc kubenswrapper[5108]: I0104 00:13:19.705186 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 04 00:13:20 crc kubenswrapper[5108]: I0104 00:13:20.687996 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5bj8" event={"ID":"49f3cf98-e60e-4844-b59e-14d18c3d9559","Type":"ContainerStarted","Data":"1082bc90f5fa7dcef2ef9afcea98fbfbcdfaab27b482e6f0f957ad0ae89fc265"}
Jan 04 00:13:23 crc kubenswrapper[5108]: I0104 00:13:23.007297 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ff989"
Jan 04 00:13:23 crc kubenswrapper[5108]: I0104 00:13:23.007891 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-ff989"
Jan 04 00:13:23 crc kubenswrapper[5108]: I0104 00:13:23.122292 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9px8h"
Jan 04 00:13:23 crc kubenswrapper[5108]: I0104 00:13:23.122358 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-9px8h"
Jan 04 00:13:23 crc kubenswrapper[5108]: I0104 00:13:23.553892 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5n9gg"
Jan 04 00:13:23 crc kubenswrapper[5108]: I0104 00:13:23.553965 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-5n9gg"
Jan 04 00:13:23 crc kubenswrapper[5108]: I0104 00:13:23.556663 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zs7zk"
Jan 04 00:13:23 crc kubenswrapper[5108]: I0104 00:13:23.556958 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-zs7zk"
Jan 04 00:13:24 crc kubenswrapper[5108]: I0104 00:13:24.225690 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5n9gg"
Jan 04 00:13:24 crc kubenswrapper[5108]: I0104 00:13:24.267603 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ff989"
Jan 04 00:13:24 crc kubenswrapper[5108]: I0104 00:13:24.272089 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9px8h"
Jan 04 00:13:24 crc kubenswrapper[5108]: I0104 00:13:24.281414 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zs7zk"
Jan 04 00:13:24 crc kubenswrapper[5108]: I0104 00:13:24.291992 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z5bj8" podStartSLOduration=12.784063169 podStartE2EDuration="49.29195013s" podCreationTimestamp="2026-01-04 00:12:35 +0000 UTC" firstStartedPulling="2026-01-04 00:12:37.705832836 +0000 UTC m=+131.694397922" lastFinishedPulling="2026-01-04 00:13:14.213719797 +0000 UTC m=+168.202284883" observedRunningTime="2026-01-04 00:13:20.711234136 +0000 UTC m=+174.699799242" watchObservedRunningTime="2026-01-04 00:13:24.29195013 +0000 UTC m=+178.280515216"
Jan 04 00:13:24 crc kubenswrapper[5108]: I0104 00:13:24.295254 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5n9gg"
Jan 04 00:13:24 crc kubenswrapper[5108]: I0104 00:13:24.346624 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ff989"
Jan 04 00:13:24 crc kubenswrapper[5108]: I0104 00:13:24.458784 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9px8h"
Jan 04 00:13:24 crc kubenswrapper[5108]: I0104 00:13:24.830870 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zs7zk"
Jan 04 00:13:24 crc kubenswrapper[5108]: I0104 00:13:24.834642 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5n9gg"]
Jan 04 00:13:25 crc kubenswrapper[5108]: I0104 00:13:25.478582 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-28926"
Jan 04 00:13:25 crc kubenswrapper[5108]: I0104 00:13:25.478682 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-28926"
Jan 04 00:13:25 crc kubenswrapper[5108]: I0104 00:13:25.592695 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-28926"
Jan 04 00:13:25 crc kubenswrapper[5108]: I0104 00:13:25.745963 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5n9gg" podUID="bdc5ebfd-e3f3-4e8c-a845-91f1644e738b" containerName="registry-server" containerID="cri-o://00e6021627e6774a0334a14b7ef708181bd8f1212e7e8a13ae87a139a907ad15" gracePeriod=2
Jan 04 00:13:25 crc kubenswrapper[5108]: I0104 00:13:25.826084 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-28926"
Jan 04 00:13:25 crc kubenswrapper[5108]: I0104 00:13:25.917924 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wpxsz"
Jan 04 00:13:25 crc kubenswrapper[5108]: I0104 00:13:25.917987 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-wpxsz"
Jan 04 00:13:25 crc kubenswrapper[5108]: I0104 00:13:25.944084 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-clk26"
Jan 04 00:13:25 crc kubenswrapper[5108]: I0104 00:13:25.944136 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-clk26"
Jan 04 00:13:26 crc kubenswrapper[5108]: I0104 00:13:26.048604 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wpxsz"
Jan 04 00:13:26 crc kubenswrapper[5108]: I0104 00:13:26.267980 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-z5bj8"
Jan 04 00:13:26 crc kubenswrapper[5108]: I0104 00:13:26.268670 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z5bj8"
Jan 04 00:13:26 crc kubenswrapper[5108]: I0104 00:13:26.843583 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wpxsz"
Jan 04 00:13:27 crc kubenswrapper[5108]: I0104 00:13:27.074568 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-clk26" podUID="1aa34c52-ea52-42e1-a7b1-a6f22e32642b" containerName="registry-server" probeResult="failure" output=<
Jan 04 00:13:27 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s
Jan 04 00:13:27 crc kubenswrapper[5108]: >
Jan 04 00:13:27 crc kubenswrapper[5108]: I0104 00:13:27.235053 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zs7zk"]
Jan 04 00:13:27 crc kubenswrapper[5108]: I0104 00:13:27.235979 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zs7zk" podUID="1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9" containerName="registry-server" containerID="cri-o://8636ee73563ad110fbc2eac5ed930e22cea7e6321105b40333f92afcf42b2a52" gracePeriod=2
Jan 04 00:13:27 crc kubenswrapper[5108]: I0104 00:13:27.313693 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z5bj8" podUID="49f3cf98-e60e-4844-b59e-14d18c3d9559" containerName="registry-server" probeResult="failure" output=<
Jan 04 00:13:27 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s
Jan 04 00:13:27 crc kubenswrapper[5108]: >
Jan 04 00:13:27 crc kubenswrapper[5108]: I0104 00:13:27.571172 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 04 00:13:27 crc kubenswrapper[5108]: I0104 00:13:27.571827 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 04 00:13:27 crc kubenswrapper[5108]: I0104 00:13:27.770157 5108 generic.go:358] "Generic (PLEG): container finished" podID="bdc5ebfd-e3f3-4e8c-a845-91f1644e738b" containerID="00e6021627e6774a0334a14b7ef708181bd8f1212e7e8a13ae87a139a907ad15" exitCode=0
Jan 04 00:13:27 crc kubenswrapper[5108]: I0104 00:13:27.770342 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n9gg" event={"ID":"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b","Type":"ContainerDied","Data":"00e6021627e6774a0334a14b7ef708181bd8f1212e7e8a13ae87a139a907ad15"}
Jan 04 00:13:27 crc kubenswrapper[5108]: I0104 00:13:27.772598 5108 generic.go:358] "Generic (PLEG): container finished" podID="1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9" containerID="8636ee73563ad110fbc2eac5ed930e22cea7e6321105b40333f92afcf42b2a52" exitCode=0
Jan 04 00:13:27 crc kubenswrapper[5108]: I0104 00:13:27.773360 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs7zk" event={"ID":"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9","Type":"ContainerDied","Data":"8636ee73563ad110fbc2eac5ed930e22cea7e6321105b40333f92afcf42b2a52"}
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.070955 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5n9gg"
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.216091 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-catalog-content\") pod \"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b\" (UID: \"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b\") "
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.216349 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-utilities\") pod \"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b\" (UID: \"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b\") "
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.217364 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-utilities" (OuterVolumeSpecName: "utilities") pod "bdc5ebfd-e3f3-4e8c-a845-91f1644e738b" (UID: "bdc5ebfd-e3f3-4e8c-a845-91f1644e738b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.217390 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cnbk\" (UniqueName: \"kubernetes.io/projected/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-kube-api-access-2cnbk\") pod \"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b\" (UID: \"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b\") "
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.218589 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-utilities\") on node \"crc\" DevicePath \"\""
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.226396 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-kube-api-access-2cnbk" (OuterVolumeSpecName: "kube-api-access-2cnbk") pod "bdc5ebfd-e3f3-4e8c-a845-91f1644e738b" (UID: "bdc5ebfd-e3f3-4e8c-a845-91f1644e738b"). InnerVolumeSpecName "kube-api-access-2cnbk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.286690 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdc5ebfd-e3f3-4e8c-a845-91f1644e738b" (UID: "bdc5ebfd-e3f3-4e8c-a845-91f1644e738b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.319474 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2cnbk\" (UniqueName: \"kubernetes.io/projected/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-kube-api-access-2cnbk\") on node \"crc\" DevicePath \"\""
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.319512 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.665462 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zs7zk"
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.726038 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-utilities\") pod \"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9\" (UID: \"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9\") "
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.726133 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmr9w\" (UniqueName: \"kubernetes.io/projected/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-kube-api-access-gmr9w\") pod \"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9\" (UID: \"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9\") "
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.726260 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-catalog-content\") pod \"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9\" (UID: \"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9\") "
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.726967 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-utilities" (OuterVolumeSpecName: "utilities") pod "1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9" (UID: "1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.732830 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-kube-api-access-gmr9w" (OuterVolumeSpecName: "kube-api-access-gmr9w") pod "1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9" (UID: "1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9"). InnerVolumeSpecName "kube-api-access-gmr9w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.788793 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n9gg" event={"ID":"bdc5ebfd-e3f3-4e8c-a845-91f1644e738b","Type":"ContainerDied","Data":"2fd33844376409ee6d662a930a1ba8389dbd2d664fdf986a91dd87ae14974966"}
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.788860 5108 scope.go:117] "RemoveContainer" containerID="00e6021627e6774a0334a14b7ef708181bd8f1212e7e8a13ae87a139a907ad15"
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.789053 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5n9gg"
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.788762 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9" (UID: "1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.795030 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs7zk" event={"ID":"1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9","Type":"ContainerDied","Data":"1dc63a1bd4e0d961f3eacfb051bcc1786949001d74709d7f44dc55c0bd0e6327"}
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.795367 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zs7zk"
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.820833 5108 scope.go:117] "RemoveContainer" containerID="089d8cb77a5efda75d0ec33ded9ae530ccb5a434e6228a399617a056b62f4206"
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.828249 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-utilities\") on node \"crc\" DevicePath \"\""
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.828845 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gmr9w\" (UniqueName: \"kubernetes.io/projected/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-kube-api-access-gmr9w\") on node \"crc\" DevicePath \"\""
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.828862 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.835665 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5n9gg"]
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.837116 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5n9gg"]
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.858298 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zs7zk"]
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.860799 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zs7zk"]
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.864953 5108 scope.go:117] "RemoveContainer" containerID="5f95447117aeeab24fb218fce592fd66dfc3b716535923c5ab957c9fa7f1b5db"
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.880537 5108 scope.go:117] "RemoveContainer" containerID="8636ee73563ad110fbc2eac5ed930e22cea7e6321105b40333f92afcf42b2a52"
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.904092 5108 scope.go:117] "RemoveContainer" containerID="5e53147638ab0f0c4d4c046c92f2bc0f379082fe14fdb7fafe8eaef332d05f69"
Jan 04 00:13:28 crc kubenswrapper[5108]: I0104 00:13:28.936564 5108 scope.go:117] "RemoveContainer" containerID="cd59c850dc839dd29d57d9034d6abfb51d15df1f1d8f3b54277f3b39fa3c7cb4"
Jan 04 00:13:29 crc kubenswrapper[5108]: I0104 00:13:29.637801 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wpxsz"]
Jan 04 00:13:29 crc kubenswrapper[5108]: I0104 00:13:29.638321 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wpxsz" podUID="d28a78c9-d785-4300-bbfe-580917daaeb7" containerName="registry-server" containerID="cri-o://af0652253cfcf907c4112a70d8311252aebd9976a1eab822bf19292256c3765d" gracePeriod=2
Jan 04 00:13:29 crc kubenswrapper[5108]: I0104 00:13:29.698996 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 04 00:13:29 crc kubenswrapper[5108]: I0104 00:13:29.699111 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 04 00:13:30 crc kubenswrapper[5108]: I0104 00:13:30.463152 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9" path="/var/lib/kubelet/pods/1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9/volumes"
Jan 04 00:13:30 crc kubenswrapper[5108]: I0104 00:13:30.464594 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdc5ebfd-e3f3-4e8c-a845-91f1644e738b" path="/var/lib/kubelet/pods/bdc5ebfd-e3f3-4e8c-a845-91f1644e738b/volumes"
Jan 04 00:13:31 crc kubenswrapper[5108]: I0104 00:13:31.821863 5108 generic.go:358] "Generic (PLEG): container finished" podID="d28a78c9-d785-4300-bbfe-580917daaeb7" containerID="af0652253cfcf907c4112a70d8311252aebd9976a1eab822bf19292256c3765d" exitCode=0
Jan 04 00:13:31 crc kubenswrapper[5108]: I0104 00:13:31.822133 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpxsz" event={"ID":"d28a78c9-d785-4300-bbfe-580917daaeb7","Type":"ContainerDied","Data":"af0652253cfcf907c4112a70d8311252aebd9976a1eab822bf19292256c3765d"}
Jan 04 00:13:32 crc kubenswrapper[5108]: I0104 00:13:32.834127 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpxsz" event={"ID":"d28a78c9-d785-4300-bbfe-580917daaeb7","Type":"ContainerDied","Data":"40397994e2beed64d7866dab282b8180765f569e76fefd07df1c5ea460d229cb"}
Jan 04 00:13:32 crc kubenswrapper[5108]: I0104 00:13:32.834913 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40397994e2beed64d7866dab282b8180765f569e76fefd07df1c5ea460d229cb"
Jan 04 00:13:32 crc kubenswrapper[5108]: I0104 00:13:32.875303 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wpxsz"
Jan 04 00:13:33 crc kubenswrapper[5108]: I0104 00:13:33.008008 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d28a78c9-d785-4300-bbfe-580917daaeb7-utilities\") pod \"d28a78c9-d785-4300-bbfe-580917daaeb7\" (UID: \"d28a78c9-d785-4300-bbfe-580917daaeb7\") "
Jan 04 00:13:33 crc kubenswrapper[5108]: I0104 00:13:33.008227 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh47z\" (UniqueName: \"kubernetes.io/projected/d28a78c9-d785-4300-bbfe-580917daaeb7-kube-api-access-gh47z\") pod \"d28a78c9-d785-4300-bbfe-580917daaeb7\" (UID: \"d28a78c9-d785-4300-bbfe-580917daaeb7\") "
Jan 04 00:13:33 crc kubenswrapper[5108]: I0104 00:13:33.008340 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d28a78c9-d785-4300-bbfe-580917daaeb7-catalog-content\") pod \"d28a78c9-d785-4300-bbfe-580917daaeb7\" (UID: \"d28a78c9-d785-4300-bbfe-580917daaeb7\") "
Jan 04 00:13:33 crc kubenswrapper[5108]: I0104 00:13:33.010258 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d28a78c9-d785-4300-bbfe-580917daaeb7-utilities" (OuterVolumeSpecName: "utilities") pod "d28a78c9-d785-4300-bbfe-580917daaeb7" (UID: "d28a78c9-d785-4300-bbfe-580917daaeb7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:13:33 crc kubenswrapper[5108]: I0104 00:13:33.027588 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d28a78c9-d785-4300-bbfe-580917daaeb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d28a78c9-d785-4300-bbfe-580917daaeb7" (UID: "d28a78c9-d785-4300-bbfe-580917daaeb7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:13:33 crc kubenswrapper[5108]: I0104 00:13:33.029957 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d28a78c9-d785-4300-bbfe-580917daaeb7-kube-api-access-gh47z" (OuterVolumeSpecName: "kube-api-access-gh47z") pod "d28a78c9-d785-4300-bbfe-580917daaeb7" (UID: "d28a78c9-d785-4300-bbfe-580917daaeb7"). InnerVolumeSpecName "kube-api-access-gh47z". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:13:33 crc kubenswrapper[5108]: I0104 00:13:33.109908 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gh47z\" (UniqueName: \"kubernetes.io/projected/d28a78c9-d785-4300-bbfe-580917daaeb7-kube-api-access-gh47z\") on node \"crc\" DevicePath \"\""
Jan 04 00:13:33 crc kubenswrapper[5108]: I0104 00:13:33.110420 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d28a78c9-d785-4300-bbfe-580917daaeb7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 04 00:13:33 crc kubenswrapper[5108]: I0104 00:13:33.110431 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d28a78c9-d785-4300-bbfe-580917daaeb7-utilities\") on node \"crc\" DevicePath \"\""
Jan 04 00:13:33 crc kubenswrapper[5108]: I0104 00:13:33.840978 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wpxsz"
Jan 04 00:13:33 crc kubenswrapper[5108]: I0104 00:13:33.875167 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wpxsz"]
Jan 04 00:13:33 crc kubenswrapper[5108]: I0104 00:13:33.878116 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wpxsz"]
Jan 04 00:13:34 crc kubenswrapper[5108]: I0104 00:13:34.458732 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d28a78c9-d785-4300-bbfe-580917daaeb7" path="/var/lib/kubelet/pods/d28a78c9-d785-4300-bbfe-580917daaeb7/volumes"
Jan 04 00:13:36 crc kubenswrapper[5108]: I0104 00:13:36.084217 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-clk26"
Jan 04 00:13:36 crc kubenswrapper[5108]: I0104 00:13:36.196349 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-clk26"
Jan 04 00:13:36 crc kubenswrapper[5108]: I0104 00:13:36.308135 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z5bj8"
Jan 04 00:13:36 crc kubenswrapper[5108]: I0104 00:13:36.361984 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z5bj8"
Jan 04 00:13:37 crc kubenswrapper[5108]: I0104 00:13:37.573799 5108 patch_prober.go:28] interesting pod/downloads-747b44746d-glcdh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 04 00:13:37 crc kubenswrapper[5108]: I0104 00:13:37.573885 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-glcdh" podUID="68f75634-8fb1-40a4-801d-6355d62d81f8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 04 00:13:39 crc kubenswrapper[5108]: I0104 00:13:39.236185 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z5bj8"]
Jan 04 00:13:39 crc kubenswrapper[5108]: I0104 00:13:39.237104 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z5bj8" podUID="49f3cf98-e60e-4844-b59e-14d18c3d9559" containerName="registry-server" containerID="cri-o://1082bc90f5fa7dcef2ef9afcea98fbfbcdfaab27b482e6f0f957ad0ae89fc265" gracePeriod=2
Jan 04 00:13:39 crc kubenswrapper[5108]: I0104 00:13:39.683177 5108 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-6nmg2 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 04 00:13:39 crc kubenswrapper[5108]: I0104 00:13:39.683305 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-68cf44c8b8-6nmg2" podUID="b46b2db9-9cd3-4bd2-aa59-7ba4e54949bd" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.735710 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5bj8_49f3cf98-e60e-4844-b59e-14d18c3d9559/registry-server/0.log"
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.741224 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5bj8"
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.869638 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl26m\" (UniqueName: \"kubernetes.io/projected/49f3cf98-e60e-4844-b59e-14d18c3d9559-kube-api-access-nl26m\") pod \"49f3cf98-e60e-4844-b59e-14d18c3d9559\" (UID: \"49f3cf98-e60e-4844-b59e-14d18c3d9559\") "
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.869847 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f3cf98-e60e-4844-b59e-14d18c3d9559-utilities\") pod \"49f3cf98-e60e-4844-b59e-14d18c3d9559\" (UID: \"49f3cf98-e60e-4844-b59e-14d18c3d9559\") "
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.869896 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f3cf98-e60e-4844-b59e-14d18c3d9559-catalog-content\") pod \"49f3cf98-e60e-4844-b59e-14d18c3d9559\" (UID: \"49f3cf98-e60e-4844-b59e-14d18c3d9559\") "
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.871267 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f3cf98-e60e-4844-b59e-14d18c3d9559-utilities" (OuterVolumeSpecName: "utilities") pod "49f3cf98-e60e-4844-b59e-14d18c3d9559" (UID: "49f3cf98-e60e-4844-b59e-14d18c3d9559"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.877349 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f3cf98-e60e-4844-b59e-14d18c3d9559-kube-api-access-nl26m" (OuterVolumeSpecName: "kube-api-access-nl26m") pod "49f3cf98-e60e-4844-b59e-14d18c3d9559" (UID: "49f3cf98-e60e-4844-b59e-14d18c3d9559"). InnerVolumeSpecName "kube-api-access-nl26m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.904380 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5bj8_49f3cf98-e60e-4844-b59e-14d18c3d9559/registry-server/0.log"
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.905554 5108 generic.go:358] "Generic (PLEG): container finished" podID="49f3cf98-e60e-4844-b59e-14d18c3d9559" containerID="1082bc90f5fa7dcef2ef9afcea98fbfbcdfaab27b482e6f0f957ad0ae89fc265" exitCode=137
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.905923 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5bj8" event={"ID":"49f3cf98-e60e-4844-b59e-14d18c3d9559","Type":"ContainerDied","Data":"1082bc90f5fa7dcef2ef9afcea98fbfbcdfaab27b482e6f0f957ad0ae89fc265"}
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.906003 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5bj8" event={"ID":"49f3cf98-e60e-4844-b59e-14d18c3d9559","Type":"ContainerDied","Data":"be2f9529fbf894f542845855bfe1e25aa7a53fb2f868e6506c7fa637d2b61d82"}
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.906034 5108 scope.go:117] "RemoveContainer" containerID="1082bc90f5fa7dcef2ef9afcea98fbfbcdfaab27b482e6f0f957ad0ae89fc265"
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.905945 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5bj8"
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.931165 5108 scope.go:117] "RemoveContainer" containerID="e807679b4d2691a71885a9fc77071fa0f4ac35eb7224d95c8009b6cb6ba33821"
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.957267 5108 scope.go:117] "RemoveContainer" containerID="4818df206aa110065db6199226259b0e3988e9e707b36c790bd3753bd2bc4696"
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.971798 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f3cf98-e60e-4844-b59e-14d18c3d9559-utilities\") on node \"crc\" DevicePath \"\""
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.971839 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nl26m\" (UniqueName: \"kubernetes.io/projected/49f3cf98-e60e-4844-b59e-14d18c3d9559-kube-api-access-nl26m\") on node \"crc\" DevicePath \"\""
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.990469 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f3cf98-e60e-4844-b59e-14d18c3d9559-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49f3cf98-e60e-4844-b59e-14d18c3d9559" (UID: "49f3cf98-e60e-4844-b59e-14d18c3d9559"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.996684 5108 scope.go:117] "RemoveContainer" containerID="1082bc90f5fa7dcef2ef9afcea98fbfbcdfaab27b482e6f0f957ad0ae89fc265"
Jan 04 00:13:42 crc kubenswrapper[5108]: E0104 00:13:42.997449 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1082bc90f5fa7dcef2ef9afcea98fbfbcdfaab27b482e6f0f957ad0ae89fc265\": container with ID starting with 1082bc90f5fa7dcef2ef9afcea98fbfbcdfaab27b482e6f0f957ad0ae89fc265 not found: ID does not exist" containerID="1082bc90f5fa7dcef2ef9afcea98fbfbcdfaab27b482e6f0f957ad0ae89fc265"
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.997513 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1082bc90f5fa7dcef2ef9afcea98fbfbcdfaab27b482e6f0f957ad0ae89fc265"} err="failed to get container status \"1082bc90f5fa7dcef2ef9afcea98fbfbcdfaab27b482e6f0f957ad0ae89fc265\": rpc error: code = NotFound desc = could not find container \"1082bc90f5fa7dcef2ef9afcea98fbfbcdfaab27b482e6f0f957ad0ae89fc265\": container with ID starting with 1082bc90f5fa7dcef2ef9afcea98fbfbcdfaab27b482e6f0f957ad0ae89fc265 not found: ID does not exist"
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.997555 5108 scope.go:117] "RemoveContainer" containerID="e807679b4d2691a71885a9fc77071fa0f4ac35eb7224d95c8009b6cb6ba33821"
Jan 04 00:13:42 crc kubenswrapper[5108]: E0104 00:13:42.998127 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e807679b4d2691a71885a9fc77071fa0f4ac35eb7224d95c8009b6cb6ba33821\": container with ID starting with e807679b4d2691a71885a9fc77071fa0f4ac35eb7224d95c8009b6cb6ba33821 not found: ID does not exist" containerID="e807679b4d2691a71885a9fc77071fa0f4ac35eb7224d95c8009b6cb6ba33821"
Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.998160
5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e807679b4d2691a71885a9fc77071fa0f4ac35eb7224d95c8009b6cb6ba33821"} err="failed to get container status \"e807679b4d2691a71885a9fc77071fa0f4ac35eb7224d95c8009b6cb6ba33821\": rpc error: code = NotFound desc = could not find container \"e807679b4d2691a71885a9fc77071fa0f4ac35eb7224d95c8009b6cb6ba33821\": container with ID starting with e807679b4d2691a71885a9fc77071fa0f4ac35eb7224d95c8009b6cb6ba33821 not found: ID does not exist" Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.998186 5108 scope.go:117] "RemoveContainer" containerID="4818df206aa110065db6199226259b0e3988e9e707b36c790bd3753bd2bc4696" Jan 04 00:13:42 crc kubenswrapper[5108]: E0104 00:13:42.998674 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4818df206aa110065db6199226259b0e3988e9e707b36c790bd3753bd2bc4696\": container with ID starting with 4818df206aa110065db6199226259b0e3988e9e707b36c790bd3753bd2bc4696 not found: ID does not exist" containerID="4818df206aa110065db6199226259b0e3988e9e707b36c790bd3753bd2bc4696" Jan 04 00:13:42 crc kubenswrapper[5108]: I0104 00:13:42.998840 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4818df206aa110065db6199226259b0e3988e9e707b36c790bd3753bd2bc4696"} err="failed to get container status \"4818df206aa110065db6199226259b0e3988e9e707b36c790bd3753bd2bc4696\": rpc error: code = NotFound desc = could not find container \"4818df206aa110065db6199226259b0e3988e9e707b36c790bd3753bd2bc4696\": container with ID starting with 4818df206aa110065db6199226259b0e3988e9e707b36c790bd3753bd2bc4696 not found: ID does not exist" Jan 04 00:13:43 crc kubenswrapper[5108]: I0104 00:13:43.074674 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/49f3cf98-e60e-4844-b59e-14d18c3d9559-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:13:43 crc kubenswrapper[5108]: I0104 00:13:43.239482 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z5bj8"] Jan 04 00:13:43 crc kubenswrapper[5108]: I0104 00:13:43.244080 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z5bj8"] Jan 04 00:13:44 crc kubenswrapper[5108]: I0104 00:13:44.466045 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49f3cf98-e60e-4844-b59e-14d18c3d9559" path="/var/lib/kubelet/pods/49f3cf98-e60e-4844-b59e-14d18c3d9559/volumes" Jan 04 00:13:45 crc kubenswrapper[5108]: I0104 00:13:45.785050 5108 ???:1] "http: TLS handshake error from 192.168.126.11:59266: no serving certificate available for the kubelet" Jan 04 00:13:47 crc kubenswrapper[5108]: I0104 00:13:47.576758 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-glcdh" Jan 04 00:13:48 crc kubenswrapper[5108]: I0104 00:13:48.111122 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-bxnjs"] Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.230835 5108 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232673 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d28a78c9-d785-4300-bbfe-580917daaeb7" containerName="extract-content" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232693 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28a78c9-d785-4300-bbfe-580917daaeb7" containerName="extract-content" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232703 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9" containerName="registry-server" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232711 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9" containerName="registry-server" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232727 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9" containerName="extract-content" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232734 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9" containerName="extract-content" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232745 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9" containerName="extract-utilities" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232758 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9" containerName="extract-utilities" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232778 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49f3cf98-e60e-4844-b59e-14d18c3d9559" containerName="extract-content" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232785 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f3cf98-e60e-4844-b59e-14d18c3d9559" containerName="extract-content" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232799 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d28a78c9-d785-4300-bbfe-580917daaeb7" containerName="extract-utilities" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232806 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28a78c9-d785-4300-bbfe-580917daaeb7" containerName="extract-utilities" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232815 5108 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bdc5ebfd-e3f3-4e8c-a845-91f1644e738b" containerName="extract-content" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232823 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc5ebfd-e3f3-4e8c-a845-91f1644e738b" containerName="extract-content" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232832 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a" containerName="pruner" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232839 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a" containerName="pruner" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232853 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bdc5ebfd-e3f3-4e8c-a845-91f1644e738b" containerName="extract-utilities" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232862 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc5ebfd-e3f3-4e8c-a845-91f1644e738b" containerName="extract-utilities" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232874 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49f3cf98-e60e-4844-b59e-14d18c3d9559" containerName="extract-utilities" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232882 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f3cf98-e60e-4844-b59e-14d18c3d9559" containerName="extract-utilities" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232893 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d28a78c9-d785-4300-bbfe-580917daaeb7" containerName="registry-server" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232903 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28a78c9-d785-4300-bbfe-580917daaeb7" containerName="registry-server" Jan 04 00:13:57 crc 
kubenswrapper[5108]: I0104 00:13:57.232915 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49f3cf98-e60e-4844-b59e-14d18c3d9559" containerName="registry-server" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232925 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f3cf98-e60e-4844-b59e-14d18c3d9559" containerName="registry-server" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232935 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bdc5ebfd-e3f3-4e8c-a845-91f1644e738b" containerName="registry-server" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.232942 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc5ebfd-e3f3-4e8c-a845-91f1644e738b" containerName="registry-server" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.233067 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="49f3cf98-e60e-4844-b59e-14d18c3d9559" containerName="registry-server" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.233081 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="bdc5ebfd-e3f3-4e8c-a845-91f1644e738b" containerName="registry-server" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.233092 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="ba0b21ed-3cfe-4dc3-a793-1eeed1d2b87a" containerName="pruner" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.233105 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d28a78c9-d785-4300-bbfe-580917daaeb7" containerName="registry-server" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.233115 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="1c1ab8f0-8eaf-4433-9c0c-1f7070910ee9" containerName="registry-server" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.255797 5108 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 04 00:13:57 crc 
kubenswrapper[5108]: I0104 00:13:57.256085 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.256082 5108 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.256696 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://7b7d5d310358a9b842de277978eebe04b3dd67697935a4e7331293c8f2ce2c12" gracePeriod=15 Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.256738 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b96c4a7615d0a65347b947faa43f2ce0466226b8e218fb7f926e49d834809fa9" gracePeriod=15 Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.256811 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://2d7a38395218096d15fda6992626e039e078f2bec25e625392f1b72f1fc46dcb" gracePeriod=15 Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.256753 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://cf77409fe9a2a06b6cee539ab960b8ffe727a07751479e7c45e6314efc896193" gracePeriod=15 Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257453 5108 kuberuntime_container.go:858] "Killing container with a grace 
period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b50433c05b4e9462bc1aeb26ab699177676176c7912e3f3701262c4c809e3cc2" gracePeriod=15 Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257809 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257837 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257852 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257861 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257875 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257882 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257892 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257898 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257907 5108 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257913 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257928 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257937 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257954 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257961 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257969 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257974 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257982 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257987 5108 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.257996 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.258001 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.258101 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.258116 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.258123 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.258130 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.258142 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.258155 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.258170 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-check-endpoints" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.258412 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.258630 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.263574 5108 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.285402 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.286030 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.286083 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.286143 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.286168 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.286292 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.286387 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.286409 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.286996 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.287280 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.354441 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: E0104 00:13:57.355856 5108 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.389062 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.389139 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.389171 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.389311 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.389449 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.389620 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.389646 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.389699 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.389751 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.389767 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.389808 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.389845 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.389907 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.389932 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.389959 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.390046 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.390074 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.390051 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.390099 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.390436 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.657319 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:57 crc kubenswrapper[5108]: E0104 00:13:57.684828 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18875ed2abbde44b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:13:57.683700811 +0000 UTC m=+211.672265897,LastTimestamp:2026-01-04 00:13:57.683700811 +0000 UTC m=+211.672265897,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.692423 5108 generic.go:358] "Generic (PLEG): container finished" podID="c3c98488-aab3-45f2-8ada-d1dfcb4751a8" containerID="5e242258e510e5bdd45a934a090b0b157ba252a3e226a996bef68eb2d513912d" exitCode=0 Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.692538 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"c3c98488-aab3-45f2-8ada-d1dfcb4751a8","Type":"ContainerDied","Data":"5e242258e510e5bdd45a934a090b0b157ba252a3e226a996bef68eb2d513912d"} Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.694094 5108 status_manager.go:895] "Failed to get status for pod" 
podUID="c3c98488-aab3-45f2-8ada-d1dfcb4751a8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.695253 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.696433 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.697085 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b96c4a7615d0a65347b947faa43f2ce0466226b8e218fb7f926e49d834809fa9" exitCode=0 Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.697114 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="2d7a38395218096d15fda6992626e039e078f2bec25e625392f1b72f1fc46dcb" exitCode=0 Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.697123 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="cf77409fe9a2a06b6cee539ab960b8ffe727a07751479e7c45e6314efc896193" exitCode=0 Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.697133 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b50433c05b4e9462bc1aeb26ab699177676176c7912e3f3701262c4c809e3cc2" exitCode=2 Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.697194 5108 scope.go:117] "RemoveContainer" containerID="001488f02f298ecdbad61e43398fbbe845d04526ab076c51dc377df80bfbc40e" Jan 04 00:13:57 crc kubenswrapper[5108]: I0104 00:13:57.698581 5108 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"532d06a3e97ee7e3cc8b36c280cfbc662886b870b9b669dd91a53632036cc185"} Jan 04 00:13:58 crc kubenswrapper[5108]: I0104 00:13:58.706725 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"c01e326a371703eccba6e61b4976415935710bf49db77b694f886b9b03713878"} Jan 04 00:13:58 crc kubenswrapper[5108]: I0104 00:13:58.707036 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:58 crc kubenswrapper[5108]: I0104 00:13:58.708227 5108 status_manager.go:895] "Failed to get status for pod" podUID="c3c98488-aab3-45f2-8ada-d1dfcb4751a8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:13:58 crc kubenswrapper[5108]: E0104 00:13:58.708913 5108 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:58 crc kubenswrapper[5108]: I0104 00:13:58.711132 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 04 00:13:58 crc kubenswrapper[5108]: I0104 00:13:58.968256 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 04 00:13:58 crc kubenswrapper[5108]: I0104 00:13:58.969518 5108 status_manager.go:895] "Failed to get status for pod" podUID="c3c98488-aab3-45f2-8ada-d1dfcb4751a8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:13:59 crc kubenswrapper[5108]: I0104 00:13:59.019618 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-var-lock\") pod \"c3c98488-aab3-45f2-8ada-d1dfcb4751a8\" (UID: \"c3c98488-aab3-45f2-8ada-d1dfcb4751a8\") " Jan 04 00:13:59 crc kubenswrapper[5108]: I0104 00:13:59.019744 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-kube-api-access\") pod \"c3c98488-aab3-45f2-8ada-d1dfcb4751a8\" (UID: \"c3c98488-aab3-45f2-8ada-d1dfcb4751a8\") " Jan 04 00:13:59 crc kubenswrapper[5108]: I0104 00:13:59.019951 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-kubelet-dir\") pod \"c3c98488-aab3-45f2-8ada-d1dfcb4751a8\" (UID: \"c3c98488-aab3-45f2-8ada-d1dfcb4751a8\") " Jan 04 00:13:59 crc kubenswrapper[5108]: I0104 00:13:59.020345 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-var-lock" (OuterVolumeSpecName: "var-lock") pod "c3c98488-aab3-45f2-8ada-d1dfcb4751a8" (UID: "c3c98488-aab3-45f2-8ada-d1dfcb4751a8"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:13:59 crc kubenswrapper[5108]: I0104 00:13:59.020476 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c3c98488-aab3-45f2-8ada-d1dfcb4751a8" (UID: "c3c98488-aab3-45f2-8ada-d1dfcb4751a8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:13:59 crc kubenswrapper[5108]: I0104 00:13:59.029252 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c3c98488-aab3-45f2-8ada-d1dfcb4751a8" (UID: "c3c98488-aab3-45f2-8ada-d1dfcb4751a8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:13:59 crc kubenswrapper[5108]: I0104 00:13:59.122399 5108 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-var-lock\") on node \"crc\" DevicePath \"\"" Jan 04 00:13:59 crc kubenswrapper[5108]: I0104 00:13:59.122892 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 04 00:13:59 crc kubenswrapper[5108]: I0104 00:13:59.122988 5108 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3c98488-aab3-45f2-8ada-d1dfcb4751a8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 04 00:13:59 crc kubenswrapper[5108]: E0104 00:13:59.367447 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.200:6443: connect: 
connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18875ed2abbde44b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:13:57.683700811 +0000 UTC m=+211.672265897,LastTimestamp:2026-01-04 00:13:57.683700811 +0000 UTC m=+211.672265897,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:13:59 crc kubenswrapper[5108]: I0104 00:13:59.722133 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 04 00:13:59 crc kubenswrapper[5108]: I0104 00:13:59.722123 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"c3c98488-aab3-45f2-8ada-d1dfcb4751a8","Type":"ContainerDied","Data":"95ebee630e8eaf66bba665166f94ace21e2519aab27cea457863c16ca8da5b11"} Jan 04 00:13:59 crc kubenswrapper[5108]: I0104 00:13:59.722317 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95ebee630e8eaf66bba665166f94ace21e2519aab27cea457863c16ca8da5b11" Jan 04 00:13:59 crc kubenswrapper[5108]: I0104 00:13:59.725680 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 04 00:13:59 crc kubenswrapper[5108]: I0104 00:13:59.726516 5108 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7b7d5d310358a9b842de277978eebe04b3dd67697935a4e7331293c8f2ce2c12" exitCode=0 Jan 04 00:13:59 crc kubenswrapper[5108]: I0104 00:13:59.726908 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:59 crc kubenswrapper[5108]: E0104 00:13:59.727417 5108 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:13:59 crc kubenswrapper[5108]: I0104 00:13:59.737136 5108 status_manager.go:895] "Failed to get status for pod" podUID="c3c98488-aab3-45f2-8ada-d1dfcb4751a8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.200:6443: connect: connection 
refused" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.168121 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.172913 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.174077 5108 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.174857 5108 status_manager.go:895] "Failed to get status for pod" podUID="c3c98488-aab3-45f2-8ada-d1dfcb4751a8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.242764 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.242843 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.242985 5108 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.243089 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.243122 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.243121 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.243162 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.243233 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.243374 5108 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.243394 5108 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.243402 5108 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.244486 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.247407 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.345089 5108 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.345704 5108 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.458752 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.737891 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.739732 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.739761 5108 scope.go:117] "RemoveContainer" containerID="b96c4a7615d0a65347b947faa43f2ce0466226b8e218fb7f926e49d834809fa9" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.740815 5108 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.742232 5108 status_manager.go:895] "Failed to get status for pod" podUID="c3c98488-aab3-45f2-8ada-d1dfcb4751a8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.746074 5108 status_manager.go:895] "Failed to get status for pod" podUID="c3c98488-aab3-45f2-8ada-d1dfcb4751a8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.746293 5108 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.763208 5108 scope.go:117] "RemoveContainer" containerID="2d7a38395218096d15fda6992626e039e078f2bec25e625392f1b72f1fc46dcb" Jan 04 00:14:00 crc 
kubenswrapper[5108]: I0104 00:14:00.781451 5108 scope.go:117] "RemoveContainer" containerID="cf77409fe9a2a06b6cee539ab960b8ffe727a07751479e7c45e6314efc896193" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.799095 5108 scope.go:117] "RemoveContainer" containerID="b50433c05b4e9462bc1aeb26ab699177676176c7912e3f3701262c4c809e3cc2" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.831703 5108 scope.go:117] "RemoveContainer" containerID="7b7d5d310358a9b842de277978eebe04b3dd67697935a4e7331293c8f2ce2c12" Jan 04 00:14:00 crc kubenswrapper[5108]: I0104 00:14:00.851928 5108 scope.go:117] "RemoveContainer" containerID="76bbcaf7c19eae97cabab72b1af9ee18fd88354943af8dab060b9ab39179242a" Jan 04 00:14:01 crc kubenswrapper[5108]: E0104 00:14:01.546722 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:01 crc kubenswrapper[5108]: E0104 00:14:01.547666 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:01 crc kubenswrapper[5108]: E0104 00:14:01.548903 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:01 crc kubenswrapper[5108]: E0104 00:14:01.549409 5108 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:01 crc kubenswrapper[5108]: E0104 00:14:01.549884 5108 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:01 crc kubenswrapper[5108]: I0104 00:14:01.549937 5108 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 04 00:14:01 crc kubenswrapper[5108]: E0104 00:14:01.550424 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="200ms" Jan 04 00:14:01 crc kubenswrapper[5108]: E0104 00:14:01.751377 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="400ms" Jan 04 00:14:02 crc kubenswrapper[5108]: E0104 00:14:02.152251 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="800ms" Jan 04 00:14:02 crc kubenswrapper[5108]: E0104 00:14:02.953750 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="1.6s" Jan 04 00:14:04 crc kubenswrapper[5108]: E0104 00:14:04.555809 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection 
refused" interval="3.2s" Jan 04 00:14:06 crc kubenswrapper[5108]: I0104 00:14:06.455450 5108 status_manager.go:895] "Failed to get status for pod" podUID="c3c98488-aab3-45f2-8ada-d1dfcb4751a8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:07 crc kubenswrapper[5108]: E0104 00:14:07.757550 5108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="6.4s" Jan 04 00:14:09 crc kubenswrapper[5108]: E0104 00:14:09.369355 5108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18875ed2abbde44b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-04 00:13:57.683700811 +0000 UTC m=+211.672265897,LastTimestamp:2026-01-04 00:13:57.683700811 +0000 UTC m=+211.672265897,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 04 00:14:10 crc kubenswrapper[5108]: I0104 
00:14:10.448709 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:14:10 crc kubenswrapper[5108]: I0104 00:14:10.450034 5108 status_manager.go:895] "Failed to get status for pod" podUID="c3c98488-aab3-45f2-8ada-d1dfcb4751a8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:10 crc kubenswrapper[5108]: I0104 00:14:10.471088 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1581284b-5ee5-493b-8401-025c4348876e" Jan 04 00:14:10 crc kubenswrapper[5108]: I0104 00:14:10.471154 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1581284b-5ee5-493b-8401-025c4348876e" Jan 04 00:14:10 crc kubenswrapper[5108]: E0104 00:14:10.471923 5108 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:14:10 crc kubenswrapper[5108]: I0104 00:14:10.472387 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:14:10 crc kubenswrapper[5108]: I0104 00:14:10.819705 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"39ae6b5b9ae0959058f20ebb4b5103b546bfd73cc54bd7db5388a6a52258c957"} Jan 04 00:14:12 crc kubenswrapper[5108]: I0104 00:14:12.398050 5108 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 04 00:14:12 crc kubenswrapper[5108]: I0104 00:14:12.398150 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 04 00:14:12 crc kubenswrapper[5108]: I0104 00:14:12.838005 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 04 00:14:12 crc kubenswrapper[5108]: I0104 00:14:12.838065 5108 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="e4871dd57f0ecd21f2d7f2b64f2493a0612dd77b89b0feeff7852b3ea1421b33" exitCode=1 Jan 04 00:14:12 crc kubenswrapper[5108]: I0104 00:14:12.838193 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"e4871dd57f0ecd21f2d7f2b64f2493a0612dd77b89b0feeff7852b3ea1421b33"} Jan 04 00:14:12 crc kubenswrapper[5108]: I0104 00:14:12.839116 5108 scope.go:117] "RemoveContainer" containerID="e4871dd57f0ecd21f2d7f2b64f2493a0612dd77b89b0feeff7852b3ea1421b33" Jan 04 00:14:12 crc kubenswrapper[5108]: I0104 00:14:12.839870 5108 status_manager.go:895] "Failed to get status for pod" podUID="c3c98488-aab3-45f2-8ada-d1dfcb4751a8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:12 crc kubenswrapper[5108]: I0104 00:14:12.840504 5108 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:12 crc kubenswrapper[5108]: I0104 00:14:12.840958 5108 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="87d01e57d01bdc38d854aeb1aa8eb6b02edaf34bf8e624e9a87a24fdfc6dc0c6" exitCode=0 Jan 04 00:14:12 crc kubenswrapper[5108]: I0104 00:14:12.841250 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"87d01e57d01bdc38d854aeb1aa8eb6b02edaf34bf8e624e9a87a24fdfc6dc0c6"} Jan 04 00:14:12 crc kubenswrapper[5108]: I0104 00:14:12.841522 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1581284b-5ee5-493b-8401-025c4348876e" Jan 04 00:14:12 crc kubenswrapper[5108]: I0104 00:14:12.841542 5108 mirror_client.go:130] 
"Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1581284b-5ee5-493b-8401-025c4348876e" Jan 04 00:14:12 crc kubenswrapper[5108]: E0104 00:14:12.842179 5108 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:14:12 crc kubenswrapper[5108]: I0104 00:14:12.842178 5108 status_manager.go:895] "Failed to get status for pod" podUID="c3c98488-aab3-45f2-8ada-d1dfcb4751a8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:12 crc kubenswrapper[5108]: I0104 00:14:12.842582 5108 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 04 00:14:13 crc kubenswrapper[5108]: I0104 00:14:13.155092 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" podUID="0ed21f10-7015-400b-bd89-9b5ba497be04" containerName="oauth-openshift" containerID="cri-o://1e37aefd7ab5f07f549e53d82a601add01180aa1d9ee58b853f5712a7d4ff781" gracePeriod=15 Jan 04 00:14:13 crc kubenswrapper[5108]: I0104 00:14:13.575165 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 04 00:14:13 crc kubenswrapper[5108]: I0104 00:14:13.635332 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:14:13 crc kubenswrapper[5108]: I0104 00:14:13.852562 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"3d837257283b1a7f47ed77a23eaaa89517a73518cef40efbc2a47e2e9c4348d1"} Jan 04 00:14:13 crc kubenswrapper[5108]: I0104 00:14:13.852625 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"3975e075916f152edee5b5c481e621270d9d9dc26540f6d68d9e9eb3b7a9ab44"} Jan 04 00:14:13 crc kubenswrapper[5108]: I0104 00:14:13.852640 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"be8a11d68ef090972c123f780f9cf71d745ba63f7ee8c39792852eafdd39cbc5"} Jan 04 00:14:13 crc kubenswrapper[5108]: I0104 00:14:13.855056 5108 generic.go:358] "Generic (PLEG): container finished" podID="0ed21f10-7015-400b-bd89-9b5ba497be04" containerID="1e37aefd7ab5f07f549e53d82a601add01180aa1d9ee58b853f5712a7d4ff781" exitCode=0 Jan 04 00:14:13 crc kubenswrapper[5108]: I0104 00:14:13.855148 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" event={"ID":"0ed21f10-7015-400b-bd89-9b5ba497be04","Type":"ContainerDied","Data":"1e37aefd7ab5f07f549e53d82a601add01180aa1d9ee58b853f5712a7d4ff781"} Jan 04 00:14:13 crc kubenswrapper[5108]: I0104 00:14:13.855167 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" event={"ID":"0ed21f10-7015-400b-bd89-9b5ba497be04","Type":"ContainerDied","Data":"e3cbca7b7073d07773ddebb451843f317eaed2d3c6976b7e16cf90380d2c3c84"} Jan 04 00:14:13 crc kubenswrapper[5108]: 
I0104 00:14:13.855190 5108 scope.go:117] "RemoveContainer" containerID="1e37aefd7ab5f07f549e53d82a601add01180aa1d9ee58b853f5712a7d4ff781" Jan 04 00:14:13 crc kubenswrapper[5108]: I0104 00:14:13.855413 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-bxnjs" Jan 04 00:14:13 crc kubenswrapper[5108]: I0104 00:14:13.872543 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 04 00:14:13 crc kubenswrapper[5108]: I0104 00:14:13.872884 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"b2ef02dcae48ff795697b4abf915d6bdc7def9702aeb2b1165cc99a1657ec64d"} Jan 04 00:14:13 crc kubenswrapper[5108]: I0104 00:14:13.909591 5108 scope.go:117] "RemoveContainer" containerID="1e37aefd7ab5f07f549e53d82a601add01180aa1d9ee58b853f5712a7d4ff781" Jan 04 00:14:13 crc kubenswrapper[5108]: E0104 00:14:13.910315 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e37aefd7ab5f07f549e53d82a601add01180aa1d9ee58b853f5712a7d4ff781\": container with ID starting with 1e37aefd7ab5f07f549e53d82a601add01180aa1d9ee58b853f5712a7d4ff781 not found: ID does not exist" containerID="1e37aefd7ab5f07f549e53d82a601add01180aa1d9ee58b853f5712a7d4ff781" Jan 04 00:14:13 crc kubenswrapper[5108]: I0104 00:14:13.910365 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e37aefd7ab5f07f549e53d82a601add01180aa1d9ee58b853f5712a7d4ff781"} err="failed to get container status \"1e37aefd7ab5f07f549e53d82a601add01180aa1d9ee58b853f5712a7d4ff781\": rpc error: code = NotFound desc = could not find container 
\"1e37aefd7ab5f07f549e53d82a601add01180aa1d9ee58b853f5712a7d4ff781\": container with ID starting with 1e37aefd7ab5f07f549e53d82a601add01180aa1d9ee58b853f5712a7d4ff781 not found: ID does not exist" Jan 04 00:14:14 crc kubenswrapper[5108]: I0104 00:14:14.884730 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"46b14893bf36d37c38d5edbb8194f46813e1898860d093c73f19ed1803767647"} Jan 04 00:14:14 crc kubenswrapper[5108]: I0104 00:14:14.885261 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"a6dc3924e4b13de707339aeb800b69a1293d4cc096b525131fe12875f8bbf777"} Jan 04 00:14:14 crc kubenswrapper[5108]: I0104 00:14:14.885421 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:14:14 crc kubenswrapper[5108]: I0104 00:14:14.885458 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1581284b-5ee5-493b-8401-025c4348876e" Jan 04 00:14:14 crc kubenswrapper[5108]: I0104 00:14:14.885476 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1581284b-5ee5-493b-8401-025c4348876e" Jan 04 00:14:15 crc kubenswrapper[5108]: I0104 00:14:15.472938 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:14:15 crc kubenswrapper[5108]: I0104 00:14:15.473539 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:14:15 crc kubenswrapper[5108]: I0104 00:14:15.482752 5108 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]log ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]etcd ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/generic-apiserver-start-informers ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/priority-and-fairness-filter ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/start-apiextensions-informers ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/start-apiextensions-controllers ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/crd-informer-synced ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/start-system-namespaces-controller ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 04 00:14:15 crc 
kubenswrapper[5108]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 04 00:14:15 crc kubenswrapper[5108]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 04 00:14:15 crc kubenswrapper[5108]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/bootstrap-controller ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/start-kube-aggregator-informers ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/apiservice-registration-controller ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/apiservice-discovery-controller ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]autoregister-completion ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/apiservice-openapi-controller ok Jan 04 00:14:15 crc kubenswrapper[5108]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 04 00:14:15 crc kubenswrapper[5108]: livez check failed Jan 04 00:14:15 crc kubenswrapper[5108]: I0104 00:14:15.482856 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="57755cc5f99000cc11e193051474d4e2" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 04 
00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.879312 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-router-certs\") pod \"0ed21f10-7015-400b-bd89-9b5ba497be04\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.879968 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-serving-cert\") pod \"0ed21f10-7015-400b-bd89-9b5ba497be04\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.880019 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-cliconfig\") pod \"0ed21f10-7015-400b-bd89-9b5ba497be04\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.880103 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf6hj\" (UniqueName: \"kubernetes.io/projected/0ed21f10-7015-400b-bd89-9b5ba497be04-kube-api-access-zf6hj\") pod \"0ed21f10-7015-400b-bd89-9b5ba497be04\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.880157 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-idp-0-file-data\") pod \"0ed21f10-7015-400b-bd89-9b5ba497be04\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.880186 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-provider-selection\") pod \"0ed21f10-7015-400b-bd89-9b5ba497be04\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.880240 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ed21f10-7015-400b-bd89-9b5ba497be04-audit-dir\") pod \"0ed21f10-7015-400b-bd89-9b5ba497be04\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.880288 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-service-ca\") pod \"0ed21f10-7015-400b-bd89-9b5ba497be04\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.880311 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-error\") pod \"0ed21f10-7015-400b-bd89-9b5ba497be04\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.880339 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-session\") pod \"0ed21f10-7015-400b-bd89-9b5ba497be04\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.880373 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-trusted-ca-bundle\") pod \"0ed21f10-7015-400b-bd89-9b5ba497be04\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.880458 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-login\") pod \"0ed21f10-7015-400b-bd89-9b5ba497be04\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.880488 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-ocp-branding-template\") pod \"0ed21f10-7015-400b-bd89-9b5ba497be04\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.880555 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-audit-policies\") pod \"0ed21f10-7015-400b-bd89-9b5ba497be04\" (UID: \"0ed21f10-7015-400b-bd89-9b5ba497be04\") " Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.882646 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ed21f10-7015-400b-bd89-9b5ba497be04-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0ed21f10-7015-400b-bd89-9b5ba497be04" (UID: "0ed21f10-7015-400b-bd89-9b5ba497be04"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.883247 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0ed21f10-7015-400b-bd89-9b5ba497be04" (UID: "0ed21f10-7015-400b-bd89-9b5ba497be04"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.883296 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0ed21f10-7015-400b-bd89-9b5ba497be04" (UID: "0ed21f10-7015-400b-bd89-9b5ba497be04"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.883310 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0ed21f10-7015-400b-bd89-9b5ba497be04" (UID: "0ed21f10-7015-400b-bd89-9b5ba497be04"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.883624 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0ed21f10-7015-400b-bd89-9b5ba497be04" (UID: "0ed21f10-7015-400b-bd89-9b5ba497be04"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.895132 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0ed21f10-7015-400b-bd89-9b5ba497be04" (UID: "0ed21f10-7015-400b-bd89-9b5ba497be04"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.908176 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0ed21f10-7015-400b-bd89-9b5ba497be04" (UID: "0ed21f10-7015-400b-bd89-9b5ba497be04"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.908283 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ed21f10-7015-400b-bd89-9b5ba497be04-kube-api-access-zf6hj" (OuterVolumeSpecName: "kube-api-access-zf6hj") pod "0ed21f10-7015-400b-bd89-9b5ba497be04" (UID: "0ed21f10-7015-400b-bd89-9b5ba497be04"). InnerVolumeSpecName "kube-api-access-zf6hj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.912513 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0ed21f10-7015-400b-bd89-9b5ba497be04" (UID: "0ed21f10-7015-400b-bd89-9b5ba497be04"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.913049 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0ed21f10-7015-400b-bd89-9b5ba497be04" (UID: "0ed21f10-7015-400b-bd89-9b5ba497be04"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.913502 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0ed21f10-7015-400b-bd89-9b5ba497be04" (UID: "0ed21f10-7015-400b-bd89-9b5ba497be04"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.913866 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0ed21f10-7015-400b-bd89-9b5ba497be04" (UID: "0ed21f10-7015-400b-bd89-9b5ba497be04"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.915231 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0ed21f10-7015-400b-bd89-9b5ba497be04" (UID: "0ed21f10-7015-400b-bd89-9b5ba497be04"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.915828 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0ed21f10-7015-400b-bd89-9b5ba497be04" (UID: "0ed21f10-7015-400b-bd89-9b5ba497be04"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.925542 5108 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.925578 5108 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.929397 5108 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="pods \"kube-apiserver-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.935374 5108 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="pods \"kube-apiserver-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.982524 5108 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-audit-policies\") 
on node \"crc\" DevicePath \"\"" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.982581 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.982593 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.982606 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.982617 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zf6hj\" (UniqueName: \"kubernetes.io/projected/0ed21f10-7015-400b-bd89-9b5ba497be04-kube-api-access-zf6hj\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.982628 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.982643 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.982658 5108 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/0ed21f10-7015-400b-bd89-9b5ba497be04-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.982667 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.982680 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.982689 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.982698 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.982749 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:19 crc kubenswrapper[5108]: I0104 00:14:19.982766 5108 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ed21f10-7015-400b-bd89-9b5ba497be04-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:20 crc kubenswrapper[5108]: I0104 00:14:20.194213 5108 
status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="d9aff2ad-4204-4bfa-b0a3-d89eb61c6b42"
Jan 04 00:14:20 crc kubenswrapper[5108]: E0104 00:14:20.615683 5108 reflector.go:200] "Failed to watch" err="configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" type="*v1.ConfigMap"
Jan 04 00:14:20 crc kubenswrapper[5108]: I0104 00:14:20.941632 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1581284b-5ee5-493b-8401-025c4348876e"
Jan 04 00:14:20 crc kubenswrapper[5108]: I0104 00:14:20.941674 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1581284b-5ee5-493b-8401-025c4348876e"
Jan 04 00:14:20 crc kubenswrapper[5108]: I0104 00:14:20.945725 5108 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="d9aff2ad-4204-4bfa-b0a3-d89eb61c6b42"
Jan 04 00:14:21 crc kubenswrapper[5108]: I0104 00:14:21.318023 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:14:21 crc kubenswrapper[5108]: I0104 00:14:21.323048 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:14:21 crc kubenswrapper[5108]: I0104 00:14:21.946721 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:14:24 crc kubenswrapper[5108]: I0104 00:14:24.917383 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 04 00:14:24 crc kubenswrapper[5108]: I0104 00:14:24.918262 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 04 00:14:29 crc kubenswrapper[5108]: I0104 00:14:29.797979 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Jan 04 00:14:30 crc kubenswrapper[5108]: I0104 00:14:30.359997 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Jan 04 00:14:30 crc kubenswrapper[5108]: I0104 00:14:30.476799 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Jan 04 00:14:30 crc kubenswrapper[5108]: I0104 00:14:30.588192 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Jan 04 00:14:31 crc kubenswrapper[5108]: I0104 00:14:31.000341 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 04 00:14:31 crc kubenswrapper[5108]: I0104 00:14:31.264436 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Jan 04 00:14:31 crc kubenswrapper[5108]: I0104 00:14:31.291502 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Jan 04 00:14:31 crc kubenswrapper[5108]: I0104 00:14:31.486513 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 04 00:14:31 crc kubenswrapper[5108]: I0104 00:14:31.732983 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Jan 04 00:14:32 crc kubenswrapper[5108]: I0104 00:14:32.000898 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Jan 04 00:14:32 crc kubenswrapper[5108]: I0104 00:14:32.009451 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Jan 04 00:14:32 crc kubenswrapper[5108]: I0104 00:14:32.102061 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 04 00:14:32 crc kubenswrapper[5108]: I0104 00:14:32.125049 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Jan 04 00:14:32 crc kubenswrapper[5108]: I0104 00:14:32.154242 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:14:32 crc kubenswrapper[5108]: I0104 00:14:32.239768 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Jan 04 00:14:32 crc kubenswrapper[5108]: I0104 00:14:32.383178 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Jan 04 00:14:32 crc kubenswrapper[5108]: I0104 00:14:32.384977 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Jan 04 00:14:32 crc kubenswrapper[5108]: I0104 00:14:32.421656 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:14:32 crc kubenswrapper[5108]: I0104 00:14:32.438352 5108 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 04 00:14:32 crc kubenswrapper[5108]: I0104 00:14:32.477602 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Jan 04 00:14:32 crc kubenswrapper[5108]: I0104 00:14:32.834353 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Jan 04 00:14:32 crc kubenswrapper[5108]: I0104 00:14:32.960975 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.013331 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.068364 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.165807 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.289144 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.302065 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.303069 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.401410 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.424738 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.429585 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.477947 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.500943 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.559054 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.581023 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.584328 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.592188 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.664822 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.676495 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.694461 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.741853 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.916589 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 04 00:14:33 crc kubenswrapper[5108]: I0104 00:14:33.979680 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 04 00:14:34 crc kubenswrapper[5108]: I0104 00:14:34.135126 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Jan 04 00:14:34 crc kubenswrapper[5108]: I0104 00:14:34.223441 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Jan 04 00:14:34 crc kubenswrapper[5108]: I0104 00:14:34.248025 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 04 00:14:34 crc kubenswrapper[5108]: I0104 00:14:34.294589 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Jan 04 00:14:34 crc kubenswrapper[5108]: I0104 00:14:34.403599 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Jan 04 00:14:34 crc kubenswrapper[5108]: I0104 00:14:34.509342 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Jan 04 00:14:34 crc kubenswrapper[5108]: I0104 00:14:34.568247 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Jan 04 00:14:34 crc kubenswrapper[5108]: I0104 00:14:34.575176 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 04 00:14:34 crc kubenswrapper[5108]: I0104 00:14:34.590388 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Jan 04 00:14:34 crc kubenswrapper[5108]: I0104 00:14:34.737821 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Jan 04 00:14:34 crc kubenswrapper[5108]: I0104 00:14:34.737958 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 04 00:14:34 crc kubenswrapper[5108]: I0104 00:14:34.748631 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Jan 04 00:14:34 crc kubenswrapper[5108]: I0104 00:14:34.748665 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Jan 04 00:14:34 crc kubenswrapper[5108]: I0104 00:14:34.977127 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Jan 04 00:14:34 crc kubenswrapper[5108]: I0104 00:14:34.980112 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.017066 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.036236 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.038505 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.153371 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.250258 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.331674 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.354099 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.357795 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.414773 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.456628 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.494242 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.498331 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.620095 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.641691 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.669982 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.675221 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.685041 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.849936 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.876336 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.923257 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Jan 04 00:14:35 crc kubenswrapper[5108]: I0104 00:14:35.935254 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.196011 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.272173 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.332555 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.365991 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.419644 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.431296 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.502803 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.544565 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.577309 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.605848 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.651931 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.757822 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.760365 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.777881 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.781277 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.808444 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Jan 04 00:14:36 crc kubenswrapper[5108]: I0104 00:14:36.976601 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Jan 04 00:14:37 crc kubenswrapper[5108]: I0104 00:14:37.318456 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Jan 04 00:14:37 crc kubenswrapper[5108]: I0104 00:14:37.381184 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Jan 04 00:14:37 crc kubenswrapper[5108]: I0104 00:14:37.422842 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Jan 04 00:14:37 crc kubenswrapper[5108]: I0104 00:14:37.447548 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Jan 04 00:14:37 crc kubenswrapper[5108]: I0104 00:14:37.499501 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Jan 04 00:14:37 crc kubenswrapper[5108]: I0104 00:14:37.517703 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Jan 04 00:14:37 crc kubenswrapper[5108]: I0104 00:14:37.545135 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Jan 04 00:14:37 crc kubenswrapper[5108]: I0104 00:14:37.577270 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 04 00:14:37 crc kubenswrapper[5108]: I0104 00:14:37.622295 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 04 00:14:37 crc kubenswrapper[5108]: I0104 00:14:37.738106 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Jan 04 00:14:37 crc kubenswrapper[5108]: I0104 00:14:37.776806 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Jan 04 00:14:37 crc kubenswrapper[5108]: I0104 00:14:37.925575 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Jan 04 00:14:37 crc kubenswrapper[5108]: I0104 00:14:37.929992 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.015313 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.039736 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.042815 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.075492 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.093616 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.159994 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.187016 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.290782 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.309582 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.358340 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.403288 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.465018 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.699352 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.815085 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.876627 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.898022 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.943934 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Jan 04 00:14:38 crc kubenswrapper[5108]: I0104 00:14:38.959973 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Jan 04 00:14:39 crc kubenswrapper[5108]: I0104 00:14:39.069109 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Jan 04 00:14:39 crc kubenswrapper[5108]: I0104 00:14:39.115681 5108 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 04 00:14:39 crc kubenswrapper[5108]: I0104 00:14:39.132466 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 04 00:14:39 crc kubenswrapper[5108]: I0104 00:14:39.137404 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Jan 04 00:14:39 crc kubenswrapper[5108]: I0104 00:14:39.219134 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Jan 04 00:14:39 crc kubenswrapper[5108]: I0104 00:14:39.226338 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 04 00:14:39 crc kubenswrapper[5108]: I0104 00:14:39.274024 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 04 00:14:39 crc kubenswrapper[5108]: I0104 00:14:39.286573 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Jan 04 00:14:39 crc kubenswrapper[5108]: I0104 00:14:39.587995 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Jan 04 00:14:39 crc kubenswrapper[5108]: I0104 00:14:39.603006 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Jan 04 00:14:39 crc kubenswrapper[5108]: I0104 00:14:39.667787 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Jan 04 00:14:39 crc kubenswrapper[5108]: I0104 00:14:39.795837 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Jan 04 00:14:39 crc kubenswrapper[5108]: I0104 00:14:39.824937 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Jan 04 00:14:39 crc kubenswrapper[5108]: I0104 00:14:39.879724 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.026595 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.034454 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.126902 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.137663 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.186838 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.252066 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.252155 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.286886 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.321964 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.347055 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.354599 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.496542 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.604029 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.625521 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.643829 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.741298 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.746181 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.812497 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.813635 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.874774 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.911146 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.928294 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Jan 04 00:14:40 crc kubenswrapper[5108]: I0104 00:14:40.981825 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.053516 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.060901 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.066186 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.112633 5108 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.113176 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.118424 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-66458b6674-bxnjs"]
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.118518 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.119094 5108 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1581284b-5ee5-493b-8401-025c4348876e"
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.119126 5108 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1581284b-5ee5-493b-8401-025c4348876e"
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.119228 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ed21f10-7015-400b-bd89-9b5ba497be04" containerName="oauth-openshift"
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.119244 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ed21f10-7015-400b-bd89-9b5ba497be04" containerName="oauth-openshift"
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.119282 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c3c98488-aab3-45f2-8ada-d1dfcb4751a8" containerName="installer"
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.119289 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3c98488-aab3-45f2-8ada-d1dfcb4751a8" containerName="installer"
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.119391 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ed21f10-7015-400b-bd89-9b5ba497be04" containerName="oauth-openshift"
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.119402 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="c3c98488-aab3-45f2-8ada-d1dfcb4751a8" containerName="installer"
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.237608 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.277638 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.317674 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.361793 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.365678 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.365793 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt"
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.368594 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.370065 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.370269 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.370322 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.370332 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.370338 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.371227 5108 reflector.go:430] "Caches
populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.371486 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.371612 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.371776 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.371852 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.371910 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.373695 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.380828 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.386064 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.392991 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 04 00:14:41 crc 
kubenswrapper[5108]: I0104 00:14:41.396496 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.39646957 podStartE2EDuration="22.39646957s" podCreationTimestamp="2026-01-04 00:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:14:41.396332657 +0000 UTC m=+255.384897763" watchObservedRunningTime="2026-01-04 00:14:41.39646957 +0000 UTC m=+255.385034656" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.416570 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.457385 5108 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.464157 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.507858 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.507916 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-user-template-error\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: 
\"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.507974 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-audit-dir\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.508018 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9klbw\" (UniqueName: \"kubernetes.io/projected/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-kube-api-access-9klbw\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.508061 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.508084 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.508115 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.508135 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.508157 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-audit-policies\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.508226 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.508362 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.508527 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-user-template-login\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.508614 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-session\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.508808 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.604808 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.610365 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.610430 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.610460 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.610489 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.610513 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-audit-policies\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: 
\"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.610557 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.610582 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.610861 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-user-template-login\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.610904 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-session\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.610928 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.610959 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.610985 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-user-template-error\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.611057 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-audit-dir\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.611093 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9klbw\" (UniqueName: \"kubernetes.io/projected/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-kube-api-access-9klbw\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " 
pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.611425 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-audit-dir\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.612910 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.612936 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.612905 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-audit-policies\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.615856 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.619819 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-user-template-error\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.620342 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.620649 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-user-template-login\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.620944 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " 
pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.621331 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-session\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.622662 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.630534 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.633935 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.636495 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9klbw\" (UniqueName: 
\"kubernetes.io/projected/3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe-kube-api-access-9klbw\") pod \"oauth-openshift-7c8f88d8dd-hngdt\" (UID: \"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe\") " pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.676457 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.688960 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.703875 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.743294 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.803067 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.856833 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.863389 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.938232 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.955532 5108 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 04 00:14:41 crc kubenswrapper[5108]: I0104 00:14:41.975160 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.004737 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.049226 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.056829 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.127339 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.127805 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt"] Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.145020 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.196271 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.339084 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.458402 5108 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="0ed21f10-7015-400b-bd89-9b5ba497be04" path="/var/lib/kubelet/pods/0ed21f10-7015-400b-bd89-9b5ba497be04/volumes" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.527517 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.542468 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.565384 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.628604 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.671919 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.717911 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.869535 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.895691 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 04 00:14:42 crc kubenswrapper[5108]: I0104 00:14:42.990728 5108 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" 
reflector="k8s.io/client-go/informers/factory.go:160" Jan 04 00:14:43 crc kubenswrapper[5108]: I0104 00:14:43.036794 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 04 00:14:43 crc kubenswrapper[5108]: I0104 00:14:43.084035 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" event={"ID":"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe","Type":"ContainerStarted","Data":"3c937b1f302fdf9973caf24476be091c38c5829d70279bc41c6f10f2dcc9e2a7"} Jan 04 00:14:43 crc kubenswrapper[5108]: I0104 00:14:43.165981 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 04 00:14:43 crc kubenswrapper[5108]: I0104 00:14:43.253911 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 04 00:14:43 crc kubenswrapper[5108]: I0104 00:14:43.388567 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 04 00:14:43 crc kubenswrapper[5108]: I0104 00:14:43.396455 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 04 00:14:43 crc kubenswrapper[5108]: I0104 00:14:43.434727 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 04 00:14:43 crc kubenswrapper[5108]: I0104 00:14:43.531890 5108 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 04 00:14:43 crc kubenswrapper[5108]: I0104 00:14:43.566939 5108 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 04 00:14:43 crc kubenswrapper[5108]: I0104 00:14:43.620678 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 04 00:14:43 crc kubenswrapper[5108]: I0104 00:14:43.731783 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 04 00:14:43 crc kubenswrapper[5108]: I0104 00:14:43.765880 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 04 00:14:43 crc kubenswrapper[5108]: I0104 00:14:43.803941 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 04 00:14:43 crc kubenswrapper[5108]: I0104 00:14:43.899320 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 04 00:14:43 crc kubenswrapper[5108]: I0104 00:14:43.905166 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 04 00:14:44 crc kubenswrapper[5108]: I0104 00:14:44.016734 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 04 00:14:44 crc kubenswrapper[5108]: I0104 00:14:44.026665 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 04 00:14:44 crc kubenswrapper[5108]: I0104 00:14:44.377501 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 04 00:14:44 crc 
kubenswrapper[5108]: I0104 00:14:44.476008 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 04 00:14:44 crc kubenswrapper[5108]: I0104 00:14:44.652438 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 04 00:14:44 crc kubenswrapper[5108]: I0104 00:14:44.664146 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 04 00:14:44 crc kubenswrapper[5108]: I0104 00:14:44.890296 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 04 00:14:44 crc kubenswrapper[5108]: I0104 00:14:44.922313 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 04 00:14:45 crc kubenswrapper[5108]: I0104 00:14:45.173414 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 04 00:14:45 crc kubenswrapper[5108]: I0104 00:14:45.201507 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 04 00:14:45 crc kubenswrapper[5108]: I0104 00:14:45.314385 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 04 00:14:45 crc kubenswrapper[5108]: I0104 00:14:45.445020 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 04 00:14:45 crc kubenswrapper[5108]: I0104 00:14:45.479653 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:14:45 crc 
kubenswrapper[5108]: I0104 00:14:45.563857 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 04 00:14:45 crc kubenswrapper[5108]: I0104 00:14:45.654139 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 04 00:14:45 crc kubenswrapper[5108]: I0104 00:14:45.766812 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 04 00:14:45 crc kubenswrapper[5108]: I0104 00:14:45.786656 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 04 00:14:45 crc kubenswrapper[5108]: I0104 00:14:45.855201 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 04 00:14:46 crc kubenswrapper[5108]: I0104 00:14:46.305488 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 04 00:14:46 crc kubenswrapper[5108]: I0104 00:14:46.317295 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 04 00:14:46 crc kubenswrapper[5108]: I0104 00:14:46.576015 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 04 00:14:47 crc kubenswrapper[5108]: I0104 00:14:47.109650 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" event={"ID":"3b45c7d9-47b1-46b4-ae65-5d135ffb4bfe","Type":"ContainerStarted","Data":"c8bb8eaa8ff668735692f3b5d2fb81157ef1e2ba2cb7491816a7253a1b66d8ff"} Jan 04 00:14:47 crc kubenswrapper[5108]: I0104 00:14:47.109889 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not 
ready" pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:47 crc kubenswrapper[5108]: I0104 00:14:47.134672 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" podStartSLOduration=59.134647927 podStartE2EDuration="59.134647927s" podCreationTimestamp="2026-01-04 00:13:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:14:47.130906617 +0000 UTC m=+261.119471723" watchObservedRunningTime="2026-01-04 00:14:47.134647927 +0000 UTC m=+261.123213013" Jan 04 00:14:47 crc kubenswrapper[5108]: I0104 00:14:47.488906 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7c8f88d8dd-hngdt" Jan 04 00:14:52 crc kubenswrapper[5108]: I0104 00:14:52.369378 5108 ???:1] "http: TLS handshake error from 192.168.126.11:57518: no serving certificate available for the kubelet" Jan 04 00:14:53 crc kubenswrapper[5108]: I0104 00:14:53.827365 5108 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 04 00:14:53 crc kubenswrapper[5108]: I0104 00:14:53.827804 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://c01e326a371703eccba6e61b4976415935710bf49db77b694f886b9b03713878" gracePeriod=5 Jan 04 00:14:54 crc kubenswrapper[5108]: I0104 00:14:54.917756 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 04 00:14:54 crc kubenswrapper[5108]: 
I0104 00:14:54.917859 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.411680 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.412319 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.414138 5108 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.471414 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.472064 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 
00:14:59.471738 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.472115 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.472399 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.472457 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.472866 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.473006 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.473121 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.473516 5108 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.473625 5108 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.473704 5108 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.473775 5108 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.486572 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.575926 5108 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.852051 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.852522 5108 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="c01e326a371703eccba6e61b4976415935710bf49db77b694f886b9b03713878" exitCode=137 Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.852676 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.852778 5108 scope.go:117] "RemoveContainer" containerID="c01e326a371703eccba6e61b4976415935710bf49db77b694f886b9b03713878" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.879234 5108 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.882052 5108 scope.go:117] "RemoveContainer" containerID="c01e326a371703eccba6e61b4976415935710bf49db77b694f886b9b03713878" Jan 04 00:14:59 crc kubenswrapper[5108]: E0104 00:14:59.882682 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"c01e326a371703eccba6e61b4976415935710bf49db77b694f886b9b03713878\": container with ID starting with c01e326a371703eccba6e61b4976415935710bf49db77b694f886b9b03713878 not found: ID does not exist" containerID="c01e326a371703eccba6e61b4976415935710bf49db77b694f886b9b03713878" Jan 04 00:14:59 crc kubenswrapper[5108]: I0104 00:14:59.882753 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c01e326a371703eccba6e61b4976415935710bf49db77b694f886b9b03713878"} err="failed to get container status \"c01e326a371703eccba6e61b4976415935710bf49db77b694f886b9b03713878\": rpc error: code = NotFound desc = could not find container \"c01e326a371703eccba6e61b4976415935710bf49db77b694f886b9b03713878\": container with ID starting with c01e326a371703eccba6e61b4976415935710bf49db77b694f886b9b03713878 not found: ID does not exist" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.169488 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb"] Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.170263 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.170284 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.170422 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.178060 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.181714 5108 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.181712 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.182132 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.185496 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb"] Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.287588 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-secret-volume\") pod \"collect-profiles-29458095-nw6fb\" (UID: \"5e763e9e-23c0-4a7b-aac3-43cd67ba201f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.287666 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsgx8\" (UniqueName: \"kubernetes.io/projected/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-kube-api-access-zsgx8\") pod \"collect-profiles-29458095-nw6fb\" (UID: 
\"5e763e9e-23c0-4a7b-aac3-43cd67ba201f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.288100 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-config-volume\") pod \"collect-profiles-29458095-nw6fb\" (UID: \"5e763e9e-23c0-4a7b-aac3-43cd67ba201f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.389607 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-secret-volume\") pod \"collect-profiles-29458095-nw6fb\" (UID: \"5e763e9e-23c0-4a7b-aac3-43cd67ba201f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.389872 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zsgx8\" (UniqueName: \"kubernetes.io/projected/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-kube-api-access-zsgx8\") pod \"collect-profiles-29458095-nw6fb\" (UID: \"5e763e9e-23c0-4a7b-aac3-43cd67ba201f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.390229 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-config-volume\") pod \"collect-profiles-29458095-nw6fb\" (UID: \"5e763e9e-23c0-4a7b-aac3-43cd67ba201f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.391596 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-config-volume\") pod \"collect-profiles-29458095-nw6fb\" (UID: \"5e763e9e-23c0-4a7b-aac3-43cd67ba201f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.395876 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-secret-volume\") pod \"collect-profiles-29458095-nw6fb\" (UID: \"5e763e9e-23c0-4a7b-aac3-43cd67ba201f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.409717 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsgx8\" (UniqueName: \"kubernetes.io/projected/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-kube-api-access-zsgx8\") pod \"collect-profiles-29458095-nw6fb\" (UID: \"5e763e9e-23c0-4a7b-aac3-43cd67ba201f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.458354 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.508166 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb" Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.741771 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb"] Jan 04 00:15:00 crc kubenswrapper[5108]: I0104 00:15:00.861851 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb" event={"ID":"5e763e9e-23c0-4a7b-aac3-43cd67ba201f","Type":"ContainerStarted","Data":"0449ca80833fca435b24a1fdd959a74531035bb7930922b3c64ee99c3296798f"} Jan 04 00:15:01 crc kubenswrapper[5108]: I0104 00:15:01.872933 5108 generic.go:358] "Generic (PLEG): container finished" podID="5e763e9e-23c0-4a7b-aac3-43cd67ba201f" containerID="557db6896baee51c6fa2739e16a953d67cda580340b3fd7de765effc229c298d" exitCode=0 Jan 04 00:15:01 crc kubenswrapper[5108]: I0104 00:15:01.873056 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb" event={"ID":"5e763e9e-23c0-4a7b-aac3-43cd67ba201f","Type":"ContainerDied","Data":"557db6896baee51c6fa2739e16a953d67cda580340b3fd7de765effc229c298d"} Jan 04 00:15:03 crc kubenswrapper[5108]: I0104 00:15:03.108729 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb" Jan 04 00:15:03 crc kubenswrapper[5108]: I0104 00:15:03.234794 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-config-volume\") pod \"5e763e9e-23c0-4a7b-aac3-43cd67ba201f\" (UID: \"5e763e9e-23c0-4a7b-aac3-43cd67ba201f\") " Jan 04 00:15:03 crc kubenswrapper[5108]: I0104 00:15:03.234911 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsgx8\" (UniqueName: \"kubernetes.io/projected/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-kube-api-access-zsgx8\") pod \"5e763e9e-23c0-4a7b-aac3-43cd67ba201f\" (UID: \"5e763e9e-23c0-4a7b-aac3-43cd67ba201f\") " Jan 04 00:15:03 crc kubenswrapper[5108]: I0104 00:15:03.234945 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-secret-volume\") pod \"5e763e9e-23c0-4a7b-aac3-43cd67ba201f\" (UID: \"5e763e9e-23c0-4a7b-aac3-43cd67ba201f\") " Jan 04 00:15:03 crc kubenswrapper[5108]: I0104 00:15:03.236272 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-config-volume" (OuterVolumeSpecName: "config-volume") pod "5e763e9e-23c0-4a7b-aac3-43cd67ba201f" (UID: "5e763e9e-23c0-4a7b-aac3-43cd67ba201f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:15:03 crc kubenswrapper[5108]: I0104 00:15:03.244771 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5e763e9e-23c0-4a7b-aac3-43cd67ba201f" (UID: "5e763e9e-23c0-4a7b-aac3-43cd67ba201f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:15:03 crc kubenswrapper[5108]: I0104 00:15:03.245129 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-kube-api-access-zsgx8" (OuterVolumeSpecName: "kube-api-access-zsgx8") pod "5e763e9e-23c0-4a7b-aac3-43cd67ba201f" (UID: "5e763e9e-23c0-4a7b-aac3-43cd67ba201f"). InnerVolumeSpecName "kube-api-access-zsgx8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:15:03 crc kubenswrapper[5108]: I0104 00:15:03.336960 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsgx8\" (UniqueName: \"kubernetes.io/projected/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-kube-api-access-zsgx8\") on node \"crc\" DevicePath \"\"" Jan 04 00:15:03 crc kubenswrapper[5108]: I0104 00:15:03.337006 5108 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 04 00:15:03 crc kubenswrapper[5108]: I0104 00:15:03.337019 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e763e9e-23c0-4a7b-aac3-43cd67ba201f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 04 00:15:03 crc kubenswrapper[5108]: I0104 00:15:03.891685 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb" event={"ID":"5e763e9e-23c0-4a7b-aac3-43cd67ba201f","Type":"ContainerDied","Data":"0449ca80833fca435b24a1fdd959a74531035bb7930922b3c64ee99c3296798f"} Jan 04 00:15:03 crc kubenswrapper[5108]: I0104 00:15:03.891746 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29458095-nw6fb"
Jan 04 00:15:03 crc kubenswrapper[5108]: I0104 00:15:03.891755 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0449ca80833fca435b24a1fdd959a74531035bb7930922b3c64ee99c3296798f"
Jan 04 00:15:03 crc kubenswrapper[5108]: I0104 00:15:03.893397 5108 generic.go:358] "Generic (PLEG): container finished" podID="e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" containerID="70a9bf32fb08c2500814857c3777f4739582b8acee4b984a1e2bd55f7693707b" exitCode=0
Jan 04 00:15:03 crc kubenswrapper[5108]: I0104 00:15:03.893507 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" event={"ID":"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0","Type":"ContainerDied","Data":"70a9bf32fb08c2500814857c3777f4739582b8acee4b984a1e2bd55f7693707b"}
Jan 04 00:15:03 crc kubenswrapper[5108]: I0104 00:15:03.893935 5108 scope.go:117] "RemoveContainer" containerID="70a9bf32fb08c2500814857c3777f4739582b8acee4b984a1e2bd55f7693707b"
Jan 04 00:15:04 crc kubenswrapper[5108]: I0104 00:15:04.905187 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" event={"ID":"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0","Type":"ContainerStarted","Data":"049d38d5c84e461c95c0efcff72005df42fd1ac850c9ee1f26eadf0c2e7c6f7d"}
Jan 04 00:15:04 crc kubenswrapper[5108]: I0104 00:15:04.905988 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl"
Jan 04 00:15:04 crc kubenswrapper[5108]: I0104 00:15:04.907824 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl"
Jan 04 00:15:07 crc kubenswrapper[5108]: I0104 00:15:07.741071 5108 ???:1] "http: TLS handshake error from 192.168.126.11:40716: no serving certificate available for the kubelet"
Jan 04 00:15:07 crc kubenswrapper[5108]: I0104 00:15:07.754549 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Jan 04 00:15:13 crc kubenswrapper[5108]: I0104 00:15:13.093043 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 04 00:15:16 crc kubenswrapper[5108]: I0104 00:15:16.904112 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Jan 04 00:15:18 crc kubenswrapper[5108]: I0104 00:15:18.945474 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Jan 04 00:15:21 crc kubenswrapper[5108]: I0104 00:15:21.115507 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 04 00:15:22 crc kubenswrapper[5108]: I0104 00:15:22.258358 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Jan 04 00:15:24 crc kubenswrapper[5108]: I0104 00:15:24.917294 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 04 00:15:24 crc kubenswrapper[5108]: I0104 00:15:24.917472 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 04 00:15:24 crc kubenswrapper[5108]: I0104 00:15:24.917557 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-njl5v"
Jan 04 00:15:24 crc kubenswrapper[5108]: I0104 00:15:24.918467 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94f4e2cbc916293b4e6676fb0b3fe4568b76f062b4ce243281ad611c1958954a"} pod="openshift-machine-config-operator/machine-config-daemon-njl5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 04 00:15:24 crc kubenswrapper[5108]: I0104 00:15:24.918574 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" containerID="cri-o://94f4e2cbc916293b4e6676fb0b3fe4568b76f062b4ce243281ad611c1958954a" gracePeriod=600
Jan 04 00:15:26 crc kubenswrapper[5108]: I0104 00:15:26.041704 5108 generic.go:358] "Generic (PLEG): container finished" podID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerID="94f4e2cbc916293b4e6676fb0b3fe4568b76f062b4ce243281ad611c1958954a" exitCode=0
Jan 04 00:15:26 crc kubenswrapper[5108]: I0104 00:15:26.041823 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerDied","Data":"94f4e2cbc916293b4e6676fb0b3fe4568b76f062b4ce243281ad611c1958954a"}
Jan 04 00:15:26 crc kubenswrapper[5108]: I0104 00:15:26.042622 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerStarted","Data":"98c0ce6db2062cf99e2e7a19595c98fef731421d446df51d11c001f56a4c3cd2"}
Jan 04 00:15:26 crc kubenswrapper[5108]: I0104 00:15:26.649812 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 04 00:15:26 crc kubenswrapper[5108]: I0104 00:15:26.650140 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 04 00:15:29 crc kubenswrapper[5108]: I0104 00:15:29.062795 5108 generic.go:358] "Generic (PLEG): container finished" podID="52146c21-3246-4f94-b1ac-d912a24401ab" containerID="d6b836251db41e1dbad061050dc4cff7f1fea69385f48cb05b14c8335f0fae9e" exitCode=0
Jan 04 00:15:29 crc kubenswrapper[5108]: I0104 00:15:29.062978 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29458080-vx5nr" event={"ID":"52146c21-3246-4f94-b1ac-d912a24401ab","Type":"ContainerDied","Data":"d6b836251db41e1dbad061050dc4cff7f1fea69385f48cb05b14c8335f0fae9e"}
Jan 04 00:15:30 crc kubenswrapper[5108]: I0104 00:15:30.353332 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29458080-vx5nr"
Jan 04 00:15:30 crc kubenswrapper[5108]: I0104 00:15:30.403272 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/52146c21-3246-4f94-b1ac-d912a24401ab-serviceca\") pod \"52146c21-3246-4f94-b1ac-d912a24401ab\" (UID: \"52146c21-3246-4f94-b1ac-d912a24401ab\") "
Jan 04 00:15:30 crc kubenswrapper[5108]: I0104 00:15:30.403538 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6t8v\" (UniqueName: \"kubernetes.io/projected/52146c21-3246-4f94-b1ac-d912a24401ab-kube-api-access-p6t8v\") pod \"52146c21-3246-4f94-b1ac-d912a24401ab\" (UID: \"52146c21-3246-4f94-b1ac-d912a24401ab\") "
Jan 04 00:15:30 crc kubenswrapper[5108]: I0104 00:15:30.404497 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52146c21-3246-4f94-b1ac-d912a24401ab-serviceca" (OuterVolumeSpecName: "serviceca") pod "52146c21-3246-4f94-b1ac-d912a24401ab" (UID: "52146c21-3246-4f94-b1ac-d912a24401ab"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:15:30 crc kubenswrapper[5108]: I0104 00:15:30.413467 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52146c21-3246-4f94-b1ac-d912a24401ab-kube-api-access-p6t8v" (OuterVolumeSpecName: "kube-api-access-p6t8v") pod "52146c21-3246-4f94-b1ac-d912a24401ab" (UID: "52146c21-3246-4f94-b1ac-d912a24401ab"). InnerVolumeSpecName "kube-api-access-p6t8v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:15:30 crc kubenswrapper[5108]: I0104 00:15:30.504921 5108 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/52146c21-3246-4f94-b1ac-d912a24401ab-serviceca\") on node \"crc\" DevicePath \"\""
Jan 04 00:15:30 crc kubenswrapper[5108]: I0104 00:15:30.504971 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p6t8v\" (UniqueName: \"kubernetes.io/projected/52146c21-3246-4f94-b1ac-d912a24401ab-kube-api-access-p6t8v\") on node \"crc\" DevicePath \"\""
Jan 04 00:15:31 crc kubenswrapper[5108]: I0104 00:15:31.077226 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29458080-vx5nr" event={"ID":"52146c21-3246-4f94-b1ac-d912a24401ab","Type":"ContainerDied","Data":"eb2d938b22970aca7c792c0c2ea37c98ffc21de5bbfcf9fcba0ce3f60d03a92f"}
Jan 04 00:15:31 crc kubenswrapper[5108]: I0104 00:15:31.077319 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb2d938b22970aca7c792c0c2ea37c98ffc21de5bbfcf9fcba0ce3f60d03a92f"
Jan 04 00:15:31 crc kubenswrapper[5108]: I0104 00:15:31.077250 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29458080-vx5nr"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.396010 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-pppml"]
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.396944 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" podUID="4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b" containerName="controller-manager" containerID="cri-o://f85582411ebe356e741c362aa18aee7f5f9029e683dacc6fc669195e8a7bde14" gracePeriod=30
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.410709 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"]
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.411090 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh" podUID="af85dc64-1599-4534-8cc4-be005c8893c3" containerName="route-controller-manager" containerID="cri-o://019d7185940e98632cec357f6c635150fde9692dd996a4a2247cb560ea44c811" gracePeriod=30
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.801430 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.839657 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56889494dc-qk5jl"]
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.842695 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b" containerName="controller-manager"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.842903 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b" containerName="controller-manager"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.843026 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="52146c21-3246-4f94-b1ac-d912a24401ab" containerName="image-pruner"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.843146 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="52146c21-3246-4f94-b1ac-d912a24401ab" containerName="image-pruner"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.843262 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5e763e9e-23c0-4a7b-aac3-43cd67ba201f" containerName="collect-profiles"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.843414 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e763e9e-23c0-4a7b-aac3-43cd67ba201f" containerName="collect-profiles"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.843733 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="52146c21-3246-4f94-b1ac-d912a24401ab" containerName="image-pruner"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.843886 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b" containerName="controller-manager"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.843993 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="5e763e9e-23c0-4a7b-aac3-43cd67ba201f" containerName="collect-profiles"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.849531 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.856932 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56889494dc-qk5jl"]
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.863895 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.889373 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-client-ca\") pod \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") "
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.889456 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkk4w\" (UniqueName: \"kubernetes.io/projected/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-kube-api-access-zkk4w\") pod \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") "
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.889558 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-proxy-ca-bundles\") pod \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") "
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.889619 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-serving-cert\") pod \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") "
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.889650 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-tmp\") pod \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") "
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.889677 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-config\") pod \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\" (UID: \"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b\") "
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.889831 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-proxy-ca-bundles\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.889865 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-client-ca\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.889897 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-config\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.889929 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30960336-2557-4349-80ec-8c4809a42b0f-serving-cert\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.889989 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhqrv\" (UniqueName: \"kubernetes.io/projected/30960336-2557-4349-80ec-8c4809a42b0f-kube-api-access-xhqrv\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.890014 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/30960336-2557-4349-80ec-8c4809a42b0f-tmp\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.890982 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-tmp" (OuterVolumeSpecName: "tmp") pod "4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b" (UID: "4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.890971 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b" (UID: "4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.891544 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-config" (OuterVolumeSpecName: "config") pod "4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b" (UID: "4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.891922 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-client-ca" (OuterVolumeSpecName: "client-ca") pod "4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b" (UID: "4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.898286 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-kube-api-access-zkk4w" (OuterVolumeSpecName: "kube-api-access-zkk4w") pod "4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b" (UID: "4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b"). InnerVolumeSpecName "kube-api-access-zkk4w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.906528 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b" (UID: "4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.914164 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"]
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.915154 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af85dc64-1599-4534-8cc4-be005c8893c3" containerName="route-controller-manager"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.915223 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="af85dc64-1599-4534-8cc4-be005c8893c3" containerName="route-controller-manager"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.915385 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="af85dc64-1599-4534-8cc4-be005c8893c3" containerName="route-controller-manager"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.923664 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.924642 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"]
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.990763 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af85dc64-1599-4534-8cc4-be005c8893c3-config\") pod \"af85dc64-1599-4534-8cc4-be005c8893c3\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") "
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.990846 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af85dc64-1599-4534-8cc4-be005c8893c3-client-ca\") pod \"af85dc64-1599-4534-8cc4-be005c8893c3\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") "
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.990919 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vpwq\" (UniqueName: \"kubernetes.io/projected/af85dc64-1599-4534-8cc4-be005c8893c3-kube-api-access-4vpwq\") pod \"af85dc64-1599-4534-8cc4-be005c8893c3\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") "
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.990961 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af85dc64-1599-4534-8cc4-be005c8893c3-serving-cert\") pod \"af85dc64-1599-4534-8cc4-be005c8893c3\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") "
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.991026 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af85dc64-1599-4534-8cc4-be005c8893c3-tmp\") pod \"af85dc64-1599-4534-8cc4-be005c8893c3\" (UID: \"af85dc64-1599-4534-8cc4-be005c8893c3\") "
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.991172 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/30960336-2557-4349-80ec-8c4809a42b0f-tmp\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.991231 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs246\" (UniqueName: \"kubernetes.io/projected/1f140041-f9d1-4e70-8042-807b521356ea-kube-api-access-bs246\") pod \"route-controller-manager-6f6bc495b9-7m5wl\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.991257 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f140041-f9d1-4e70-8042-807b521356ea-serving-cert\") pod \"route-controller-manager-6f6bc495b9-7m5wl\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.991281 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-proxy-ca-bundles\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.991307 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-client-ca\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.991341 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-config\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.991369 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30960336-2557-4349-80ec-8c4809a42b0f-serving-cert\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.991390 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f140041-f9d1-4e70-8042-807b521356ea-tmp\") pod \"route-controller-manager-6f6bc495b9-7m5wl\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.991430 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f140041-f9d1-4e70-8042-807b521356ea-config\") pod \"route-controller-manager-6f6bc495b9-7m5wl\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.991686 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af85dc64-1599-4534-8cc4-be005c8893c3-config" (OuterVolumeSpecName: "config") pod "af85dc64-1599-4534-8cc4-be005c8893c3" (UID: "af85dc64-1599-4534-8cc4-be005c8893c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.991829 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xhqrv\" (UniqueName: \"kubernetes.io/projected/30960336-2557-4349-80ec-8c4809a42b0f-kube-api-access-xhqrv\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.991861 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f140041-f9d1-4e70-8042-807b521356ea-client-ca\") pod \"route-controller-manager-6f6bc495b9-7m5wl\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.991931 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.991965 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af85dc64-1599-4534-8cc4-be005c8893c3-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.992167 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af85dc64-1599-4534-8cc4-be005c8893c3-client-ca" (OuterVolumeSpecName: "client-ca") pod "af85dc64-1599-4534-8cc4-be005c8893c3" (UID: "af85dc64-1599-4534-8cc4-be005c8893c3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.992975 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-client-ca\") on node \"crc\" DevicePath \"\""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.993279 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-proxy-ca-bundles\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.993432 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zkk4w\" (UniqueName: \"kubernetes.io/projected/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-kube-api-access-zkk4w\") on node \"crc\" DevicePath \"\""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.993480 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.993502 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.993519 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b-tmp\") on node \"crc\" DevicePath \"\""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.993783 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/30960336-2557-4349-80ec-8c4809a42b0f-tmp\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.994058 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-client-ca\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.994172 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af85dc64-1599-4534-8cc4-be005c8893c3-tmp" (OuterVolumeSpecName: "tmp") pod "af85dc64-1599-4534-8cc4-be005c8893c3" (UID: "af85dc64-1599-4534-8cc4-be005c8893c3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.994471 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-config\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.996868 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af85dc64-1599-4534-8cc4-be005c8893c3-kube-api-access-4vpwq" (OuterVolumeSpecName: "kube-api-access-4vpwq") pod "af85dc64-1599-4534-8cc4-be005c8893c3" (UID: "af85dc64-1599-4534-8cc4-be005c8893c3"). InnerVolumeSpecName "kube-api-access-4vpwq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.997556 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30960336-2557-4349-80ec-8c4809a42b0f-serving-cert\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:35 crc kubenswrapper[5108]: I0104 00:15:35.998337 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af85dc64-1599-4534-8cc4-be005c8893c3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "af85dc64-1599-4534-8cc4-be005c8893c3" (UID: "af85dc64-1599-4534-8cc4-be005c8893c3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.011522 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhqrv\" (UniqueName: \"kubernetes.io/projected/30960336-2557-4349-80ec-8c4809a42b0f-kube-api-access-xhqrv\") pod \"controller-manager-56889494dc-qk5jl\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.095157 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f140041-f9d1-4e70-8042-807b521356ea-tmp\") pod \"route-controller-manager-6f6bc495b9-7m5wl\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"
Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.095264 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f140041-f9d1-4e70-8042-807b521356ea-config\") pod \"route-controller-manager-6f6bc495b9-7m5wl\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"
Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.095301 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f140041-f9d1-4e70-8042-807b521356ea-client-ca\") pod \"route-controller-manager-6f6bc495b9-7m5wl\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"
Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.095339 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bs246\" (UniqueName: \"kubernetes.io/projected/1f140041-f9d1-4e70-8042-807b521356ea-kube-api-access-bs246\") pod \"route-controller-manager-6f6bc495b9-7m5wl\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"
Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.095374 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f140041-f9d1-4e70-8042-807b521356ea-serving-cert\") pod \"route-controller-manager-6f6bc495b9-7m5wl\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"
Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.095439 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af85dc64-1599-4534-8cc4-be005c8893c3-tmp\") on node \"crc\" DevicePath \"\""
Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.095556 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af85dc64-1599-4534-8cc4-be005c8893c3-client-ca\") on node \"crc\" DevicePath \"\""
Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.095832 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4vpwq\" (UniqueName: \"kubernetes.io/projected/af85dc64-1599-4534-8cc4-be005c8893c3-kube-api-access-4vpwq\") on node \"crc\" DevicePath \"\"" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.095868 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f140041-f9d1-4e70-8042-807b521356ea-tmp\") pod \"route-controller-manager-6f6bc495b9-7m5wl\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.095909 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af85dc64-1599-4534-8cc4-be005c8893c3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.096782 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f140041-f9d1-4e70-8042-807b521356ea-client-ca\") pod \"route-controller-manager-6f6bc495b9-7m5wl\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.097560 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f140041-f9d1-4e70-8042-807b521356ea-config\") pod \"route-controller-manager-6f6bc495b9-7m5wl\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.100598 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1f140041-f9d1-4e70-8042-807b521356ea-serving-cert\") pod \"route-controller-manager-6f6bc495b9-7m5wl\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.115747 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs246\" (UniqueName: \"kubernetes.io/projected/1f140041-f9d1-4e70-8042-807b521356ea-kube-api-access-bs246\") pod \"route-controller-manager-6f6bc495b9-7m5wl\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.119387 5108 generic.go:358] "Generic (PLEG): container finished" podID="4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b" containerID="f85582411ebe356e741c362aa18aee7f5f9029e683dacc6fc669195e8a7bde14" exitCode=0 Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.119516 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" event={"ID":"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b","Type":"ContainerDied","Data":"f85582411ebe356e741c362aa18aee7f5f9029e683dacc6fc669195e8a7bde14"} Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.119558 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" event={"ID":"4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b","Type":"ContainerDied","Data":"50840408b1f20d99508f7074aff7d5636b278cd005db8632c8c04fc15714caff"} Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.119581 5108 scope.go:117] "RemoveContainer" containerID="f85582411ebe356e741c362aa18aee7f5f9029e683dacc6fc669195e8a7bde14" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.119580 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-pppml" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.122069 5108 generic.go:358] "Generic (PLEG): container finished" podID="af85dc64-1599-4534-8cc4-be005c8893c3" containerID="019d7185940e98632cec357f6c635150fde9692dd996a4a2247cb560ea44c811" exitCode=0 Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.122150 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.122172 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh" event={"ID":"af85dc64-1599-4534-8cc4-be005c8893c3","Type":"ContainerDied","Data":"019d7185940e98632cec357f6c635150fde9692dd996a4a2247cb560ea44c811"} Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.122215 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh" event={"ID":"af85dc64-1599-4534-8cc4-be005c8893c3","Type":"ContainerDied","Data":"ea3ce2bbf87cf06f9e24dbf860bbfdb00c0c4b26fee413a69f508c97a812636b"} Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.146080 5108 scope.go:117] "RemoveContainer" containerID="f85582411ebe356e741c362aa18aee7f5f9029e683dacc6fc669195e8a7bde14" Jan 04 00:15:36 crc kubenswrapper[5108]: E0104 00:15:36.146573 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f85582411ebe356e741c362aa18aee7f5f9029e683dacc6fc669195e8a7bde14\": container with ID starting with f85582411ebe356e741c362aa18aee7f5f9029e683dacc6fc669195e8a7bde14 not found: ID does not exist" containerID="f85582411ebe356e741c362aa18aee7f5f9029e683dacc6fc669195e8a7bde14" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.146624 5108 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f85582411ebe356e741c362aa18aee7f5f9029e683dacc6fc669195e8a7bde14"} err="failed to get container status \"f85582411ebe356e741c362aa18aee7f5f9029e683dacc6fc669195e8a7bde14\": rpc error: code = NotFound desc = could not find container \"f85582411ebe356e741c362aa18aee7f5f9029e683dacc6fc669195e8a7bde14\": container with ID starting with f85582411ebe356e741c362aa18aee7f5f9029e683dacc6fc669195e8a7bde14 not found: ID does not exist" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.146658 5108 scope.go:117] "RemoveContainer" containerID="019d7185940e98632cec357f6c635150fde9692dd996a4a2247cb560ea44c811" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.167794 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"] Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.171656 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-52hzh"] Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.176849 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.178238 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-pppml"] Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.179309 5108 scope.go:117] "RemoveContainer" containerID="019d7185940e98632cec357f6c635150fde9692dd996a4a2247cb560ea44c811" Jan 04 00:15:36 crc kubenswrapper[5108]: E0104 00:15:36.181370 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"019d7185940e98632cec357f6c635150fde9692dd996a4a2247cb560ea44c811\": container with ID starting with 019d7185940e98632cec357f6c635150fde9692dd996a4a2247cb560ea44c811 not found: ID does not exist" containerID="019d7185940e98632cec357f6c635150fde9692dd996a4a2247cb560ea44c811" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.181530 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"019d7185940e98632cec357f6c635150fde9692dd996a4a2247cb560ea44c811"} err="failed to get container status \"019d7185940e98632cec357f6c635150fde9692dd996a4a2247cb560ea44c811\": rpc error: code = NotFound desc = could not find container \"019d7185940e98632cec357f6c635150fde9692dd996a4a2247cb560ea44c811\": container with ID starting with 019d7185940e98632cec357f6c635150fde9692dd996a4a2247cb560ea44c811 not found: ID does not exist" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.182889 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-pppml"] Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.248220 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.385118 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56889494dc-qk5jl"] Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.394827 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.465773 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b" path="/var/lib/kubelet/pods/4f1bea3e-70d1-4a1d-af06-3da2b67e9d5b/volumes" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.466817 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af85dc64-1599-4534-8cc4-be005c8893c3" path="/var/lib/kubelet/pods/af85dc64-1599-4534-8cc4-be005c8893c3/volumes" Jan 04 00:15:36 crc kubenswrapper[5108]: I0104 00:15:36.469678 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"] Jan 04 00:15:36 crc kubenswrapper[5108]: W0104 00:15:36.482091 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f140041_f9d1_4e70_8042_807b521356ea.slice/crio-d253dd43967a3c634454c650b3fe40ffa5250f954f6ff3acfc3fbf0b8ad95dbd WatchSource:0}: Error finding container d253dd43967a3c634454c650b3fe40ffa5250f954f6ff3acfc3fbf0b8ad95dbd: Status 404 returned error can't find the container with id d253dd43967a3c634454c650b3fe40ffa5250f954f6ff3acfc3fbf0b8ad95dbd Jan 04 00:15:37 crc kubenswrapper[5108]: I0104 00:15:37.130941 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl" 
event={"ID":"1f140041-f9d1-4e70-8042-807b521356ea","Type":"ContainerStarted","Data":"32d148743e72c3c95dd4cf1d93f7305e9169274d9a9418a124d942cc5c7e44e0"} Jan 04 00:15:37 crc kubenswrapper[5108]: I0104 00:15:37.131522 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl" event={"ID":"1f140041-f9d1-4e70-8042-807b521356ea","Type":"ContainerStarted","Data":"d253dd43967a3c634454c650b3fe40ffa5250f954f6ff3acfc3fbf0b8ad95dbd"} Jan 04 00:15:37 crc kubenswrapper[5108]: I0104 00:15:37.131546 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl" Jan 04 00:15:37 crc kubenswrapper[5108]: I0104 00:15:37.133236 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl" event={"ID":"30960336-2557-4349-80ec-8c4809a42b0f","Type":"ContainerStarted","Data":"f002a43aff16b1dc47456490b051e1210bf1f48fbd8740dfd41c852416a2daba"} Jan 04 00:15:37 crc kubenswrapper[5108]: I0104 00:15:37.133307 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl" event={"ID":"30960336-2557-4349-80ec-8c4809a42b0f","Type":"ContainerStarted","Data":"eb9de12c1b9ea2cb245faf9c16560904ec6d47757156941e720e8b387fb3da65"} Jan 04 00:15:37 crc kubenswrapper[5108]: I0104 00:15:37.133517 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl" Jan 04 00:15:37 crc kubenswrapper[5108]: I0104 00:15:37.146551 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl" Jan 04 00:15:37 crc kubenswrapper[5108]: I0104 00:15:37.159492 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl" podStartSLOduration=2.159468886 podStartE2EDuration="2.159468886s" podCreationTimestamp="2026-01-04 00:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:15:37.155319795 +0000 UTC m=+311.143884901" watchObservedRunningTime="2026-01-04 00:15:37.159468886 +0000 UTC m=+311.148033972" Jan 04 00:15:37 crc kubenswrapper[5108]: I0104 00:15:37.325810 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl" Jan 04 00:15:37 crc kubenswrapper[5108]: I0104 00:15:37.347297 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl" podStartSLOduration=2.347267826 podStartE2EDuration="2.347267826s" podCreationTimestamp="2026-01-04 00:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:15:37.180858989 +0000 UTC m=+311.169424075" watchObservedRunningTime="2026-01-04 00:15:37.347267826 +0000 UTC m=+311.335832912" Jan 04 00:15:56 crc kubenswrapper[5108]: I0104 00:15:56.896951 5108 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 04 00:16:15 crc kubenswrapper[5108]: I0104 00:16:15.370419 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56889494dc-qk5jl"] Jan 04 00:16:15 crc kubenswrapper[5108]: I0104 00:16:15.371373 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl" podUID="30960336-2557-4349-80ec-8c4809a42b0f" containerName="controller-manager" 
containerID="cri-o://f002a43aff16b1dc47456490b051e1210bf1f48fbd8740dfd41c852416a2daba" gracePeriod=30 Jan 04 00:16:15 crc kubenswrapper[5108]: I0104 00:16:15.396993 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"] Jan 04 00:16:15 crc kubenswrapper[5108]: I0104 00:16:15.397262 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl" podUID="1f140041-f9d1-4e70-8042-807b521356ea" containerName="route-controller-manager" containerID="cri-o://32d148743e72c3c95dd4cf1d93f7305e9169274d9a9418a124d942cc5c7e44e0" gracePeriod=30 Jan 04 00:16:15 crc kubenswrapper[5108]: I0104 00:16:15.968010 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl" Jan 04 00:16:15 crc kubenswrapper[5108]: I0104 00:16:15.997980 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6"] Jan 04 00:16:15 crc kubenswrapper[5108]: I0104 00:16:15.998571 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f140041-f9d1-4e70-8042-807b521356ea" containerName="route-controller-manager" Jan 04 00:16:15 crc kubenswrapper[5108]: I0104 00:16:15.998592 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f140041-f9d1-4e70-8042-807b521356ea" containerName="route-controller-manager" Jan 04 00:16:15 crc kubenswrapper[5108]: I0104 00:16:15.998687 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f140041-f9d1-4e70-8042-807b521356ea" containerName="route-controller-manager" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.004923 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.022903 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6"] Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.027016 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f140041-f9d1-4e70-8042-807b521356ea-client-ca\") pod \"1f140041-f9d1-4e70-8042-807b521356ea\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.027186 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs246\" (UniqueName: \"kubernetes.io/projected/1f140041-f9d1-4e70-8042-807b521356ea-kube-api-access-bs246\") pod \"1f140041-f9d1-4e70-8042-807b521356ea\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.027274 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f140041-f9d1-4e70-8042-807b521356ea-tmp\") pod \"1f140041-f9d1-4e70-8042-807b521356ea\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.027369 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f140041-f9d1-4e70-8042-807b521356ea-config\") pod \"1f140041-f9d1-4e70-8042-807b521356ea\" (UID: \"1f140041-f9d1-4e70-8042-807b521356ea\") " Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.027409 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f140041-f9d1-4e70-8042-807b521356ea-serving-cert\") pod \"1f140041-f9d1-4e70-8042-807b521356ea\" (UID: 
\"1f140041-f9d1-4e70-8042-807b521356ea\") " Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.027631 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgmph\" (UniqueName: \"kubernetes.io/projected/5214f4a6-f1c4-4ec2-be93-c862403c4258-kube-api-access-mgmph\") pod \"route-controller-manager-85dd7d99d5-7g8t6\" (UID: \"5214f4a6-f1c4-4ec2-be93-c862403c4258\") " pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.027692 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5214f4a6-f1c4-4ec2-be93-c862403c4258-tmp\") pod \"route-controller-manager-85dd7d99d5-7g8t6\" (UID: \"5214f4a6-f1c4-4ec2-be93-c862403c4258\") " pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.027727 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5214f4a6-f1c4-4ec2-be93-c862403c4258-config\") pod \"route-controller-manager-85dd7d99d5-7g8t6\" (UID: \"5214f4a6-f1c4-4ec2-be93-c862403c4258\") " pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.027784 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f140041-f9d1-4e70-8042-807b521356ea-tmp" (OuterVolumeSpecName: "tmp") pod "1f140041-f9d1-4e70-8042-807b521356ea" (UID: "1f140041-f9d1-4e70-8042-807b521356ea"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.027820 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5214f4a6-f1c4-4ec2-be93-c862403c4258-serving-cert\") pod \"route-controller-manager-85dd7d99d5-7g8t6\" (UID: \"5214f4a6-f1c4-4ec2-be93-c862403c4258\") " pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.027878 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5214f4a6-f1c4-4ec2-be93-c862403c4258-client-ca\") pod \"route-controller-manager-85dd7d99d5-7g8t6\" (UID: \"5214f4a6-f1c4-4ec2-be93-c862403c4258\") " pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.027913 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f140041-f9d1-4e70-8042-807b521356ea-client-ca" (OuterVolumeSpecName: "client-ca") pod "1f140041-f9d1-4e70-8042-807b521356ea" (UID: "1f140041-f9d1-4e70-8042-807b521356ea"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.028039 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f140041-f9d1-4e70-8042-807b521356ea-config" (OuterVolumeSpecName: "config") pod "1f140041-f9d1-4e70-8042-807b521356ea" (UID: "1f140041-f9d1-4e70-8042-807b521356ea"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.028073 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f140041-f9d1-4e70-8042-807b521356ea-client-ca\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.028158 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f140041-f9d1-4e70-8042-807b521356ea-tmp\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.035926 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f140041-f9d1-4e70-8042-807b521356ea-kube-api-access-bs246" (OuterVolumeSpecName: "kube-api-access-bs246") pod "1f140041-f9d1-4e70-8042-807b521356ea" (UID: "1f140041-f9d1-4e70-8042-807b521356ea"). InnerVolumeSpecName "kube-api-access-bs246". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.039838 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f140041-f9d1-4e70-8042-807b521356ea-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1f140041-f9d1-4e70-8042-807b521356ea" (UID: "1f140041-f9d1-4e70-8042-807b521356ea"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.054723 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.086823 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"] Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.087406 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="30960336-2557-4349-80ec-8c4809a42b0f" containerName="controller-manager" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.087425 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="30960336-2557-4349-80ec-8c4809a42b0f" containerName="controller-manager" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.087546 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="30960336-2557-4349-80ec-8c4809a42b0f" containerName="controller-manager" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.097440 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4" Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.101903 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"] Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.129813 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhqrv\" (UniqueName: \"kubernetes.io/projected/30960336-2557-4349-80ec-8c4809a42b0f-kube-api-access-xhqrv\") pod \"30960336-2557-4349-80ec-8c4809a42b0f\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.129860 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30960336-2557-4349-80ec-8c4809a42b0f-serving-cert\") pod \"30960336-2557-4349-80ec-8c4809a42b0f\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.130007 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/30960336-2557-4349-80ec-8c4809a42b0f-tmp\") pod \"30960336-2557-4349-80ec-8c4809a42b0f\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.130224 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-proxy-ca-bundles\") pod \"30960336-2557-4349-80ec-8c4809a42b0f\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") " Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.130333 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-client-ca\") pod \"30960336-2557-4349-80ec-8c4809a42b0f\" (UID: 
\"30960336-2557-4349-80ec-8c4809a42b0f\") "
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.130396 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-config\") pod \"30960336-2557-4349-80ec-8c4809a42b0f\" (UID: \"30960336-2557-4349-80ec-8c4809a42b0f\") "
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.130774 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30960336-2557-4349-80ec-8c4809a42b0f-tmp" (OuterVolumeSpecName: "tmp") pod "30960336-2557-4349-80ec-8c4809a42b0f" (UID: "30960336-2557-4349-80ec-8c4809a42b0f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.131218 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "30960336-2557-4349-80ec-8c4809a42b0f" (UID: "30960336-2557-4349-80ec-8c4809a42b0f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.131081 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-client-ca" (OuterVolumeSpecName: "client-ca") pod "30960336-2557-4349-80ec-8c4809a42b0f" (UID: "30960336-2557-4349-80ec-8c4809a42b0f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.131434 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-config" (OuterVolumeSpecName: "config") pod "30960336-2557-4349-80ec-8c4809a42b0f" (UID: "30960336-2557-4349-80ec-8c4809a42b0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.131770 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae8a2a96-84c4-48a3-8240-344bd9acfb74-config\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.131929 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5214f4a6-f1c4-4ec2-be93-c862403c4258-serving-cert\") pod \"route-controller-manager-85dd7d99d5-7g8t6\" (UID: \"5214f4a6-f1c4-4ec2-be93-c862403c4258\") " pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.131966 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5214f4a6-f1c4-4ec2-be93-c862403c4258-client-ca\") pod \"route-controller-manager-85dd7d99d5-7g8t6\" (UID: \"5214f4a6-f1c4-4ec2-be93-c862403c4258\") " pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.132004 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ae8a2a96-84c4-48a3-8240-344bd9acfb74-tmp\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.132039 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae8a2a96-84c4-48a3-8240-344bd9acfb74-serving-cert\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.132080 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mgmph\" (UniqueName: \"kubernetes.io/projected/5214f4a6-f1c4-4ec2-be93-c862403c4258-kube-api-access-mgmph\") pod \"route-controller-manager-85dd7d99d5-7g8t6\" (UID: \"5214f4a6-f1c4-4ec2-be93-c862403c4258\") " pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.132134 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgmrh\" (UniqueName: \"kubernetes.io/projected/ae8a2a96-84c4-48a3-8240-344bd9acfb74-kube-api-access-pgmrh\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.132190 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5214f4a6-f1c4-4ec2-be93-c862403c4258-tmp\") pod \"route-controller-manager-85dd7d99d5-7g8t6\" (UID: \"5214f4a6-f1c4-4ec2-be93-c862403c4258\") " pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.132251 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5214f4a6-f1c4-4ec2-be93-c862403c4258-config\") pod \"route-controller-manager-85dd7d99d5-7g8t6\" (UID: \"5214f4a6-f1c4-4ec2-be93-c862403c4258\") " pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.132278 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae8a2a96-84c4-48a3-8240-344bd9acfb74-client-ca\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.132303 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae8a2a96-84c4-48a3-8240-344bd9acfb74-proxy-ca-bundles\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.132375 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bs246\" (UniqueName: \"kubernetes.io/projected/1f140041-f9d1-4e70-8042-807b521356ea-kube-api-access-bs246\") on node \"crc\" DevicePath \"\""
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.132390 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/30960336-2557-4349-80ec-8c4809a42b0f-tmp\") on node \"crc\" DevicePath \"\""
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.132400 5108 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.132413 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f140041-f9d1-4e70-8042-807b521356ea-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.132423 5108 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-client-ca\") on node \"crc\" DevicePath \"\""
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.132432 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f140041-f9d1-4e70-8042-807b521356ea-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.132444 5108 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30960336-2557-4349-80ec-8c4809a42b0f-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.134282 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5214f4a6-f1c4-4ec2-be93-c862403c4258-tmp\") pod \"route-controller-manager-85dd7d99d5-7g8t6\" (UID: \"5214f4a6-f1c4-4ec2-be93-c862403c4258\") " pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.134392 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5214f4a6-f1c4-4ec2-be93-c862403c4258-client-ca\") pod \"route-controller-manager-85dd7d99d5-7g8t6\" (UID: \"5214f4a6-f1c4-4ec2-be93-c862403c4258\") " pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.134918 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5214f4a6-f1c4-4ec2-be93-c862403c4258-config\") pod \"route-controller-manager-85dd7d99d5-7g8t6\" (UID: \"5214f4a6-f1c4-4ec2-be93-c862403c4258\") " pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.137362 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30960336-2557-4349-80ec-8c4809a42b0f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "30960336-2557-4349-80ec-8c4809a42b0f" (UID: "30960336-2557-4349-80ec-8c4809a42b0f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.138316 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5214f4a6-f1c4-4ec2-be93-c862403c4258-serving-cert\") pod \"route-controller-manager-85dd7d99d5-7g8t6\" (UID: \"5214f4a6-f1c4-4ec2-be93-c862403c4258\") " pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.141344 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30960336-2557-4349-80ec-8c4809a42b0f-kube-api-access-xhqrv" (OuterVolumeSpecName: "kube-api-access-xhqrv") pod "30960336-2557-4349-80ec-8c4809a42b0f" (UID: "30960336-2557-4349-80ec-8c4809a42b0f"). InnerVolumeSpecName "kube-api-access-xhqrv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.149382 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgmph\" (UniqueName: \"kubernetes.io/projected/5214f4a6-f1c4-4ec2-be93-c862403c4258-kube-api-access-mgmph\") pod \"route-controller-manager-85dd7d99d5-7g8t6\" (UID: \"5214f4a6-f1c4-4ec2-be93-c862403c4258\") " pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.233415 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ae8a2a96-84c4-48a3-8240-344bd9acfb74-tmp\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.233485 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae8a2a96-84c4-48a3-8240-344bd9acfb74-serving-cert\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.233529 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pgmrh\" (UniqueName: \"kubernetes.io/projected/ae8a2a96-84c4-48a3-8240-344bd9acfb74-kube-api-access-pgmrh\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.233573 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae8a2a96-84c4-48a3-8240-344bd9acfb74-client-ca\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.233727 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae8a2a96-84c4-48a3-8240-344bd9acfb74-proxy-ca-bundles\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.233950 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae8a2a96-84c4-48a3-8240-344bd9acfb74-config\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.234110 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ae8a2a96-84c4-48a3-8240-344bd9acfb74-tmp\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.235417 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae8a2a96-84c4-48a3-8240-344bd9acfb74-config\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.236681 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae8a2a96-84c4-48a3-8240-344bd9acfb74-proxy-ca-bundles\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.237491 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xhqrv\" (UniqueName: \"kubernetes.io/projected/30960336-2557-4349-80ec-8c4809a42b0f-kube-api-access-xhqrv\") on node \"crc\" DevicePath \"\""
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.237528 5108 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30960336-2557-4349-80ec-8c4809a42b0f-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.238634 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae8a2a96-84c4-48a3-8240-344bd9acfb74-client-ca\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.238837 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae8a2a96-84c4-48a3-8240-344bd9acfb74-serving-cert\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.262991 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgmrh\" (UniqueName: \"kubernetes.io/projected/ae8a2a96-84c4-48a3-8240-344bd9acfb74-kube-api-access-pgmrh\") pod \"controller-manager-7445c65ddc-zqpc4\" (UID: \"ae8a2a96-84c4-48a3-8240-344bd9acfb74\") " pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.351889 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.411672 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.427458 5108 generic.go:358] "Generic (PLEG): container finished" podID="30960336-2557-4349-80ec-8c4809a42b0f" containerID="f002a43aff16b1dc47456490b051e1210bf1f48fbd8740dfd41c852416a2daba" exitCode=0
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.427565 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.427601 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl" event={"ID":"30960336-2557-4349-80ec-8c4809a42b0f","Type":"ContainerDied","Data":"f002a43aff16b1dc47456490b051e1210bf1f48fbd8740dfd41c852416a2daba"}
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.427690 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56889494dc-qk5jl" event={"ID":"30960336-2557-4349-80ec-8c4809a42b0f","Type":"ContainerDied","Data":"eb9de12c1b9ea2cb245faf9c16560904ec6d47757156941e720e8b387fb3da65"}
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.427725 5108 scope.go:117] "RemoveContainer" containerID="f002a43aff16b1dc47456490b051e1210bf1f48fbd8740dfd41c852416a2daba"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.433017 5108 generic.go:358] "Generic (PLEG): container finished" podID="1f140041-f9d1-4e70-8042-807b521356ea" containerID="32d148743e72c3c95dd4cf1d93f7305e9169274d9a9418a124d942cc5c7e44e0" exitCode=0
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.433180 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.433186 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl" event={"ID":"1f140041-f9d1-4e70-8042-807b521356ea","Type":"ContainerDied","Data":"32d148743e72c3c95dd4cf1d93f7305e9169274d9a9418a124d942cc5c7e44e0"}
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.433330 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl" event={"ID":"1f140041-f9d1-4e70-8042-807b521356ea","Type":"ContainerDied","Data":"d253dd43967a3c634454c650b3fe40ffa5250f954f6ff3acfc3fbf0b8ad95dbd"}
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.464461 5108 scope.go:117] "RemoveContainer" containerID="f002a43aff16b1dc47456490b051e1210bf1f48fbd8740dfd41c852416a2daba"
Jan 04 00:16:16 crc kubenswrapper[5108]: E0104 00:16:16.465011 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f002a43aff16b1dc47456490b051e1210bf1f48fbd8740dfd41c852416a2daba\": container with ID starting with f002a43aff16b1dc47456490b051e1210bf1f48fbd8740dfd41c852416a2daba not found: ID does not exist" containerID="f002a43aff16b1dc47456490b051e1210bf1f48fbd8740dfd41c852416a2daba"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.465052 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f002a43aff16b1dc47456490b051e1210bf1f48fbd8740dfd41c852416a2daba"} err="failed to get container status \"f002a43aff16b1dc47456490b051e1210bf1f48fbd8740dfd41c852416a2daba\": rpc error: code = NotFound desc = could not find container \"f002a43aff16b1dc47456490b051e1210bf1f48fbd8740dfd41c852416a2daba\": container with ID starting with f002a43aff16b1dc47456490b051e1210bf1f48fbd8740dfd41c852416a2daba not found: ID does not exist"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.465077 5108 scope.go:117] "RemoveContainer" containerID="32d148743e72c3c95dd4cf1d93f7305e9169274d9a9418a124d942cc5c7e44e0"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.487188 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56889494dc-qk5jl"]
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.491754 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56889494dc-qk5jl"]
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.503753 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"]
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.504413 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6bc495b9-7m5wl"]
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.513024 5108 scope.go:117] "RemoveContainer" containerID="32d148743e72c3c95dd4cf1d93f7305e9169274d9a9418a124d942cc5c7e44e0"
Jan 04 00:16:16 crc kubenswrapper[5108]: E0104 00:16:16.513710 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32d148743e72c3c95dd4cf1d93f7305e9169274d9a9418a124d942cc5c7e44e0\": container with ID starting with 32d148743e72c3c95dd4cf1d93f7305e9169274d9a9418a124d942cc5c7e44e0 not found: ID does not exist" containerID="32d148743e72c3c95dd4cf1d93f7305e9169274d9a9418a124d942cc5c7e44e0"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.513769 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32d148743e72c3c95dd4cf1d93f7305e9169274d9a9418a124d942cc5c7e44e0"} err="failed to get container status \"32d148743e72c3c95dd4cf1d93f7305e9169274d9a9418a124d942cc5c7e44e0\": rpc error: code = NotFound desc = could not find container \"32d148743e72c3c95dd4cf1d93f7305e9169274d9a9418a124d942cc5c7e44e0\": container with ID starting with 32d148743e72c3c95dd4cf1d93f7305e9169274d9a9418a124d942cc5c7e44e0 not found: ID does not exist"
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.631839 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6"]
Jan 04 00:16:16 crc kubenswrapper[5108]: I0104 00:16:16.725897 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"]
Jan 04 00:16:17 crc kubenswrapper[5108]: I0104 00:16:17.444165 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4" event={"ID":"ae8a2a96-84c4-48a3-8240-344bd9acfb74","Type":"ContainerStarted","Data":"106f347850f340a126208b0a10d67a9ac2666b06800f5ef0d27b8ef19cb2d240"}
Jan 04 00:16:17 crc kubenswrapper[5108]: I0104 00:16:17.445083 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4" event={"ID":"ae8a2a96-84c4-48a3-8240-344bd9acfb74","Type":"ContainerStarted","Data":"3aa6202edf76af1a93ff338702bfdd8c7bd11387c079325ae8f8629e44014bcf"}
Jan 04 00:16:17 crc kubenswrapper[5108]: I0104 00:16:17.448630 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6" event={"ID":"5214f4a6-f1c4-4ec2-be93-c862403c4258","Type":"ContainerStarted","Data":"391f4658535d9d6cad8f28bec14cd452360f0bbb468f1d75d4a440e5e0675324"}
Jan 04 00:16:17 crc kubenswrapper[5108]: I0104 00:16:17.448697 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6"
Jan 04 00:16:17 crc kubenswrapper[5108]: I0104 00:16:17.448716 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6" event={"ID":"5214f4a6-f1c4-4ec2-be93-c862403c4258","Type":"ContainerStarted","Data":"e8a06c3552935698f943957719b659b41b947849bd13b03034ba05bf669d083b"}
Jan 04 00:16:17 crc kubenswrapper[5108]: I0104 00:16:17.456252 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6"
Jan 04 00:16:17 crc kubenswrapper[5108]: I0104 00:16:17.466070 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4" podStartSLOduration=2.466051904 podStartE2EDuration="2.466051904s" podCreationTimestamp="2026-01-04 00:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:16:17.465426207 +0000 UTC m=+351.453991313" watchObservedRunningTime="2026-01-04 00:16:17.466051904 +0000 UTC m=+351.454617000"
Jan 04 00:16:17 crc kubenswrapper[5108]: I0104 00:16:17.489227 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85dd7d99d5-7g8t6" podStartSLOduration=2.489096007 podStartE2EDuration="2.489096007s" podCreationTimestamp="2026-01-04 00:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:16:17.488243454 +0000 UTC m=+351.476808570" watchObservedRunningTime="2026-01-04 00:16:17.489096007 +0000 UTC m=+351.477661103"
Jan 04 00:16:18 crc kubenswrapper[5108]: I0104 00:16:18.466034 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f140041-f9d1-4e70-8042-807b521356ea" path="/var/lib/kubelet/pods/1f140041-f9d1-4e70-8042-807b521356ea/volumes"
Jan 04 00:16:18 crc kubenswrapper[5108]: I0104 00:16:18.468852 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30960336-2557-4349-80ec-8c4809a42b0f" path="/var/lib/kubelet/pods/30960336-2557-4349-80ec-8c4809a42b0f/volumes"
Jan 04 00:16:18 crc kubenswrapper[5108]: I0104 00:16:18.469911 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:18 crc kubenswrapper[5108]: I0104 00:16:18.500017 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7445c65ddc-zqpc4"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.279198 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9px8h"]
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.280498 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9px8h" podUID="a762f8cf-a77d-477e-8141-1bb1e02d8744" containerName="registry-server" containerID="cri-o://b225f03b112e0d22962553b298643dc88720ab004a92ff7255b581f99ff76315" gracePeriod=30
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.281997 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ff989"]
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.283831 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ff989" podUID="320a6eb9-3704-43c9-84b9-25580545ff50" containerName="registry-server" containerID="cri-o://005d7f1259ee87a5c48eb4c0760a251d9d8ac557b66c6068c09ffdcbf0fc9e7d" gracePeriod=30
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.299355 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-tptrl"]
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.299779 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" podUID="e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" containerName="marketplace-operator" containerID="cri-o://049d38d5c84e461c95c0efcff72005df42fd1ac850c9ee1f26eadf0c2e7c6f7d" gracePeriod=30
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.322352 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-28926"]
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.322933 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-28926" podUID="59b92be9-237e-4252-9bbe-a71908afb6e9" containerName="registry-server" containerID="cri-o://1c94652f4eb48de437ab80613d6c6d88d7fc5730df4a2675ee1176295b319960" gracePeriod=30
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.329119 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-clk26"]
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.329502 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-clk26" podUID="1aa34c52-ea52-42e1-a7b1-a6f22e32642b" containerName="registry-server" containerID="cri-o://642ee9c6d8e729c1462d0c8131f631802a34755fb66293268c620a1cd67c6176" gracePeriod=30
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.334846 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"]
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.359655 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"]
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.359946 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.493369 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a5a3358d-cb42-4f34-9746-87614c392fd0-tmp\") pod \"marketplace-operator-547dbd544d-qbs7x\" (UID: \"a5a3358d-cb42-4f34-9746-87614c392fd0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.493420 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh8p2\" (UniqueName: \"kubernetes.io/projected/a5a3358d-cb42-4f34-9746-87614c392fd0-kube-api-access-kh8p2\") pod \"marketplace-operator-547dbd544d-qbs7x\" (UID: \"a5a3358d-cb42-4f34-9746-87614c392fd0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.493460 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5a3358d-cb42-4f34-9746-87614c392fd0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-qbs7x\" (UID: \"a5a3358d-cb42-4f34-9746-87614c392fd0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.493745 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a5a3358d-cb42-4f34-9746-87614c392fd0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-qbs7x\" (UID: \"a5a3358d-cb42-4f34-9746-87614c392fd0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.588764 5108 generic.go:358] "Generic (PLEG): container finished" podID="e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" containerID="049d38d5c84e461c95c0efcff72005df42fd1ac850c9ee1f26eadf0c2e7c6f7d" exitCode=0
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.588864 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" event={"ID":"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0","Type":"ContainerDied","Data":"049d38d5c84e461c95c0efcff72005df42fd1ac850c9ee1f26eadf0c2e7c6f7d"}
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.588956 5108 scope.go:117] "RemoveContainer" containerID="70a9bf32fb08c2500814857c3777f4739582b8acee4b984a1e2bd55f7693707b"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.591184 5108 generic.go:358] "Generic (PLEG): container finished" podID="320a6eb9-3704-43c9-84b9-25580545ff50" containerID="005d7f1259ee87a5c48eb4c0760a251d9d8ac557b66c6068c09ffdcbf0fc9e7d" exitCode=0
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.591254 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff989" event={"ID":"320a6eb9-3704-43c9-84b9-25580545ff50","Type":"ContainerDied","Data":"005d7f1259ee87a5c48eb4c0760a251d9d8ac557b66c6068c09ffdcbf0fc9e7d"}
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.594774 5108 generic.go:358] "Generic (PLEG): container finished" podID="59b92be9-237e-4252-9bbe-a71908afb6e9" containerID="1c94652f4eb48de437ab80613d6c6d88d7fc5730df4a2675ee1176295b319960" exitCode=0
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.594882 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28926" event={"ID":"59b92be9-237e-4252-9bbe-a71908afb6e9","Type":"ContainerDied","Data":"1c94652f4eb48de437ab80613d6c6d88d7fc5730df4a2675ee1176295b319960"}
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.594897 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a5a3358d-cb42-4f34-9746-87614c392fd0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-qbs7x\" (UID: \"a5a3358d-cb42-4f34-9746-87614c392fd0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.594975 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a5a3358d-cb42-4f34-9746-87614c392fd0-tmp\") pod \"marketplace-operator-547dbd544d-qbs7x\" (UID: \"a5a3358d-cb42-4f34-9746-87614c392fd0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.595008 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kh8p2\" (UniqueName: \"kubernetes.io/projected/a5a3358d-cb42-4f34-9746-87614c392fd0-kube-api-access-kh8p2\") pod \"marketplace-operator-547dbd544d-qbs7x\" (UID: \"a5a3358d-cb42-4f34-9746-87614c392fd0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.595054 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5a3358d-cb42-4f34-9746-87614c392fd0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-qbs7x\" (UID: \"a5a3358d-cb42-4f34-9746-87614c392fd0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.595766 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a5a3358d-cb42-4f34-9746-87614c392fd0-tmp\") pod \"marketplace-operator-547dbd544d-qbs7x\" (UID: \"a5a3358d-cb42-4f34-9746-87614c392fd0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.596603 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5a3358d-cb42-4f34-9746-87614c392fd0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-qbs7x\" (UID: \"a5a3358d-cb42-4f34-9746-87614c392fd0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.598228 5108 generic.go:358] "Generic (PLEG): container finished" podID="a762f8cf-a77d-477e-8141-1bb1e02d8744" containerID="b225f03b112e0d22962553b298643dc88720ab004a92ff7255b581f99ff76315" exitCode=0
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.598302 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9px8h" event={"ID":"a762f8cf-a77d-477e-8141-1bb1e02d8744","Type":"ContainerDied","Data":"b225f03b112e0d22962553b298643dc88720ab004a92ff7255b581f99ff76315"}
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.606383 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a5a3358d-cb42-4f34-9746-87614c392fd0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-qbs7x\" (UID: \"a5a3358d-cb42-4f34-9746-87614c392fd0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.615119 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh8p2\" (UniqueName: \"kubernetes.io/projected/a5a3358d-cb42-4f34-9746-87614c392fd0-kube-api-access-kh8p2\") pod \"marketplace-operator-547dbd544d-qbs7x\" (UID: \"a5a3358d-cb42-4f34-9746-87614c392fd0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.617747 5108 generic.go:358] "Generic (PLEG): container finished" podID="1aa34c52-ea52-42e1-a7b1-a6f22e32642b" containerID="642ee9c6d8e729c1462d0c8131f631802a34755fb66293268c620a1cd67c6176" exitCode=0
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.617835 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clk26" event={"ID":"1aa34c52-ea52-42e1-a7b1-a6f22e32642b","Type":"ContainerDied","Data":"642ee9c6d8e729c1462d0c8131f631802a34755fb66293268c620a1cd67c6176"}
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.677464 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.875962 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.960794 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28926"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.967393 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9px8h"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.983738 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-clk26"
Jan 04 00:16:37 crc kubenswrapper[5108]: I0104 00:16:37.991907 5108 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-ff989" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.010425 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-tmp\") pod \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\" (UID: \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\") " Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.010625 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69l4r\" (UniqueName: \"kubernetes.io/projected/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-kube-api-access-69l4r\") pod \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\" (UID: \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\") " Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.010763 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-marketplace-operator-metrics\") pod \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\" (UID: \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\") " Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.010831 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-marketplace-trusted-ca\") pod \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\" (UID: \"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0\") " Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.011085 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-tmp" (OuterVolumeSpecName: "tmp") pod "e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" (UID: "e4e24d8d-dee7-4fe9-a832-8ff4983abbb0"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.012724 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" (UID: "e4e24d8d-dee7-4fe9-a832-8ff4983abbb0"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.013325 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.013356 5108 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-tmp\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.017602 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" (UID: "e4e24d8d-dee7-4fe9-a832-8ff4983abbb0"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.020169 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-kube-api-access-69l4r" (OuterVolumeSpecName: "kube-api-access-69l4r") pod "e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" (UID: "e4e24d8d-dee7-4fe9-a832-8ff4983abbb0"). InnerVolumeSpecName "kube-api-access-69l4r". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.114439 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-catalog-content\") pod \"1aa34c52-ea52-42e1-a7b1-a6f22e32642b\" (UID: \"1aa34c52-ea52-42e1-a7b1-a6f22e32642b\") " Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.114549 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-utilities\") pod \"1aa34c52-ea52-42e1-a7b1-a6f22e32642b\" (UID: \"1aa34c52-ea52-42e1-a7b1-a6f22e32642b\") " Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.114662 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59b92be9-237e-4252-9bbe-a71908afb6e9-utilities\") pod \"59b92be9-237e-4252-9bbe-a71908afb6e9\" (UID: \"59b92be9-237e-4252-9bbe-a71908afb6e9\") " Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.115520 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-utilities" (OuterVolumeSpecName: "utilities") pod "1aa34c52-ea52-42e1-a7b1-a6f22e32642b" (UID: "1aa34c52-ea52-42e1-a7b1-a6f22e32642b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.115632 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59b92be9-237e-4252-9bbe-a71908afb6e9-utilities" (OuterVolumeSpecName: "utilities") pod "59b92be9-237e-4252-9bbe-a71908afb6e9" (UID: "59b92be9-237e-4252-9bbe-a71908afb6e9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.115692 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320a6eb9-3704-43c9-84b9-25580545ff50-catalog-content\") pod \"320a6eb9-3704-43c9-84b9-25580545ff50\" (UID: \"320a6eb9-3704-43c9-84b9-25580545ff50\") " Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.132282 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320a6eb9-3704-43c9-84b9-25580545ff50-utilities\") pod \"320a6eb9-3704-43c9-84b9-25580545ff50\" (UID: \"320a6eb9-3704-43c9-84b9-25580545ff50\") " Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.132436 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a762f8cf-a77d-477e-8141-1bb1e02d8744-utilities\") pod \"a762f8cf-a77d-477e-8141-1bb1e02d8744\" (UID: \"a762f8cf-a77d-477e-8141-1bb1e02d8744\") " Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.132499 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6pj5\" (UniqueName: \"kubernetes.io/projected/59b92be9-237e-4252-9bbe-a71908afb6e9-kube-api-access-v6pj5\") pod \"59b92be9-237e-4252-9bbe-a71908afb6e9\" (UID: \"59b92be9-237e-4252-9bbe-a71908afb6e9\") " Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.133023 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59b92be9-237e-4252-9bbe-a71908afb6e9-catalog-content\") pod \"59b92be9-237e-4252-9bbe-a71908afb6e9\" (UID: \"59b92be9-237e-4252-9bbe-a71908afb6e9\") " Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.133063 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbbh\" 
(UniqueName: \"kubernetes.io/projected/a762f8cf-a77d-477e-8141-1bb1e02d8744-kube-api-access-cfbbh\") pod \"a762f8cf-a77d-477e-8141-1bb1e02d8744\" (UID: \"a762f8cf-a77d-477e-8141-1bb1e02d8744\") " Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.133158 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dng69\" (UniqueName: \"kubernetes.io/projected/320a6eb9-3704-43c9-84b9-25580545ff50-kube-api-access-dng69\") pod \"320a6eb9-3704-43c9-84b9-25580545ff50\" (UID: \"320a6eb9-3704-43c9-84b9-25580545ff50\") " Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.133191 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a762f8cf-a77d-477e-8141-1bb1e02d8744-catalog-content\") pod \"a762f8cf-a77d-477e-8141-1bb1e02d8744\" (UID: \"a762f8cf-a77d-477e-8141-1bb1e02d8744\") " Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.133233 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpx6b\" (UniqueName: \"kubernetes.io/projected/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-kube-api-access-bpx6b\") pod \"1aa34c52-ea52-42e1-a7b1-a6f22e32642b\" (UID: \"1aa34c52-ea52-42e1-a7b1-a6f22e32642b\") " Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.133627 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a762f8cf-a77d-477e-8141-1bb1e02d8744-utilities" (OuterVolumeSpecName: "utilities") pod "a762f8cf-a77d-477e-8141-1bb1e02d8744" (UID: "a762f8cf-a77d-477e-8141-1bb1e02d8744"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.134090 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-utilities\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.134133 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59b92be9-237e-4252-9bbe-a71908afb6e9-utilities\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.134149 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-69l4r\" (UniqueName: \"kubernetes.io/projected/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-kube-api-access-69l4r\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.134161 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a762f8cf-a77d-477e-8141-1bb1e02d8744-utilities\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.134171 5108 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.136123 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b92be9-237e-4252-9bbe-a71908afb6e9-kube-api-access-v6pj5" (OuterVolumeSpecName: "kube-api-access-v6pj5") pod "59b92be9-237e-4252-9bbe-a71908afb6e9" (UID: "59b92be9-237e-4252-9bbe-a71908afb6e9"). InnerVolumeSpecName "kube-api-access-v6pj5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.137020 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/320a6eb9-3704-43c9-84b9-25580545ff50-kube-api-access-dng69" (OuterVolumeSpecName: "kube-api-access-dng69") pod "320a6eb9-3704-43c9-84b9-25580545ff50" (UID: "320a6eb9-3704-43c9-84b9-25580545ff50"). InnerVolumeSpecName "kube-api-access-dng69". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.138995 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-kube-api-access-bpx6b" (OuterVolumeSpecName: "kube-api-access-bpx6b") pod "1aa34c52-ea52-42e1-a7b1-a6f22e32642b" (UID: "1aa34c52-ea52-42e1-a7b1-a6f22e32642b"). InnerVolumeSpecName "kube-api-access-bpx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.139363 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/320a6eb9-3704-43c9-84b9-25580545ff50-utilities" (OuterVolumeSpecName: "utilities") pod "320a6eb9-3704-43c9-84b9-25580545ff50" (UID: "320a6eb9-3704-43c9-84b9-25580545ff50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.142545 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a762f8cf-a77d-477e-8141-1bb1e02d8744-kube-api-access-cfbbh" (OuterVolumeSpecName: "kube-api-access-cfbbh") pod "a762f8cf-a77d-477e-8141-1bb1e02d8744" (UID: "a762f8cf-a77d-477e-8141-1bb1e02d8744"). InnerVolumeSpecName "kube-api-access-cfbbh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.154888 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59b92be9-237e-4252-9bbe-a71908afb6e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59b92be9-237e-4252-9bbe-a71908afb6e9" (UID: "59b92be9-237e-4252-9bbe-a71908afb6e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.179991 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a762f8cf-a77d-477e-8141-1bb1e02d8744-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a762f8cf-a77d-477e-8141-1bb1e02d8744" (UID: "a762f8cf-a77d-477e-8141-1bb1e02d8744"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.183793 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/320a6eb9-3704-43c9-84b9-25580545ff50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "320a6eb9-3704-43c9-84b9-25580545ff50" (UID: "320a6eb9-3704-43c9-84b9-25580545ff50"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.235077 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dng69\" (UniqueName: \"kubernetes.io/projected/320a6eb9-3704-43c9-84b9-25580545ff50-kube-api-access-dng69\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.235614 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a762f8cf-a77d-477e-8141-1bb1e02d8744-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.235626 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bpx6b\" (UniqueName: \"kubernetes.io/projected/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-kube-api-access-bpx6b\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.235635 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320a6eb9-3704-43c9-84b9-25580545ff50-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.235644 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320a6eb9-3704-43c9-84b9-25580545ff50-utilities\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.235660 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v6pj5\" (UniqueName: \"kubernetes.io/projected/59b92be9-237e-4252-9bbe-a71908afb6e9-kube-api-access-v6pj5\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.235679 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59b92be9-237e-4252-9bbe-a71908afb6e9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:38 crc 
kubenswrapper[5108]: I0104 00:16:38.235691 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cfbbh\" (UniqueName: \"kubernetes.io/projected/a762f8cf-a77d-477e-8141-1bb1e02d8744-kube-api-access-cfbbh\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.252823 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1aa34c52-ea52-42e1-a7b1-a6f22e32642b" (UID: "1aa34c52-ea52-42e1-a7b1-a6f22e32642b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.327736 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"] Jan 04 00:16:38 crc kubenswrapper[5108]: W0104 00:16:38.333530 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5a3358d_cb42_4f34_9746_87614c392fd0.slice/crio-a46b9e6e4c4f4c82c6207fc6c6d850d8760d0a27b595afae31fb1ee04407a89b WatchSource:0}: Error finding container a46b9e6e4c4f4c82c6207fc6c6d850d8760d0a27b595afae31fb1ee04407a89b: Status 404 returned error can't find the container with id a46b9e6e4c4f4c82c6207fc6c6d850d8760d0a27b595afae31fb1ee04407a89b Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.336690 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1aa34c52-ea52-42e1-a7b1-a6f22e32642b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.629037 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff989" 
event={"ID":"320a6eb9-3704-43c9-84b9-25580545ff50","Type":"ContainerDied","Data":"a00db4ce7d726050aa2830753c8585884469ef6aaf94809b9e71f1711279436e"} Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.629112 5108 scope.go:117] "RemoveContainer" containerID="005d7f1259ee87a5c48eb4c0760a251d9d8ac557b66c6068c09ffdcbf0fc9e7d" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.629212 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ff989" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.633490 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28926" event={"ID":"59b92be9-237e-4252-9bbe-a71908afb6e9","Type":"ContainerDied","Data":"16d1e9a58054623ac50b41cccb3a04588806f536689383f8cfe4b6bdbbe50b36"} Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.633720 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28926" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.642872 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9px8h" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.643186 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9px8h" event={"ID":"a762f8cf-a77d-477e-8141-1bb1e02d8744","Type":"ContainerDied","Data":"31f2336c22471a37bf881fc0d187124f25fdea36778b6c181cd7655b66138e00"} Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.647453 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-clk26" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.647712 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clk26" event={"ID":"1aa34c52-ea52-42e1-a7b1-a6f22e32642b","Type":"ContainerDied","Data":"4b5f861601d1bb512fd16d58e941f6fd63c1a48559fe02019b46cea0f2bed4a6"} Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.654260 5108 scope.go:117] "RemoveContainer" containerID="88e7f0f780f8d738e221d255104188c91c0c16e6b4911749f1beff44e3ef308f" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.657952 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x" event={"ID":"a5a3358d-cb42-4f34-9746-87614c392fd0","Type":"ContainerStarted","Data":"63a4b3768eec3fd217ca644ad81b87a7eff12566968a8e86000ffe4d9e77ccc4"} Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.658025 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x" event={"ID":"a5a3358d-cb42-4f34-9746-87614c392fd0","Type":"ContainerStarted","Data":"a46b9e6e4c4f4c82c6207fc6c6d850d8760d0a27b595afae31fb1ee04407a89b"} Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.658052 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.659820 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ff989"] Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.661550 5108 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-qbs7x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body= Jan 04 00:16:38 
crc kubenswrapper[5108]: I0104 00:16:38.661679 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x" podUID="a5a3358d-cb42-4f34-9746-87614c392fd0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.664055 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" event={"ID":"e4e24d8d-dee7-4fe9-a832-8ff4983abbb0","Type":"ContainerDied","Data":"df02298865ea31c8fc9e93a53765420fb37b632ea15cd0ac85d12fc8326ba1e2"} Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.664222 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-tptrl" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.666710 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ff989"] Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.681890 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-clk26"] Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.692330 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-clk26"] Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.694357 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9px8h"] Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.695762 5108 scope.go:117] "RemoveContainer" containerID="2498dbcf829a4273cdf43954e9afe8c54f16f260eab393f1c3f171f0dbfd275d" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.697000 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9px8h"] Jan 04 00:16:38 crc 
kubenswrapper[5108]: I0104 00:16:38.705747 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x" podStartSLOduration=1.705725022 podStartE2EDuration="1.705725022s" podCreationTimestamp="2026-01-04 00:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:16:38.704672573 +0000 UTC m=+372.693237669" watchObservedRunningTime="2026-01-04 00:16:38.705725022 +0000 UTC m=+372.694290118" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.726253 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-28926"] Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.727856 5108 scope.go:117] "RemoveContainer" containerID="1c94652f4eb48de437ab80613d6c6d88d7fc5730df4a2675ee1176295b319960" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.734634 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-28926"] Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.744417 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-tptrl"] Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.745453 5108 scope.go:117] "RemoveContainer" containerID="fd0246f8c2b5444e71df9baf14add9a0cc95e817dcdd6f0c8dc48ba6ff041866" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.748728 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-tptrl"] Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.765324 5108 scope.go:117] "RemoveContainer" containerID="5ba6dc9847b2151ddb21ea830f1493f817217bf71ad71bc20c223d01fdb83e06" Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.784395 5108 scope.go:117] "RemoveContainer" containerID="b225f03b112e0d22962553b298643dc88720ab004a92ff7255b581f99ff76315" 
Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.800886 5108 scope.go:117] "RemoveContainer" containerID="baee8ea5e4bf3524f6dc574001d38454d061cda8e6f6c1b44ad4e76fd7314bf9"
Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.825462 5108 scope.go:117] "RemoveContainer" containerID="3ae8f7ea05b6e70896de33871a27a0220359dd318d363fce9c4b2dad444454f6"
Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.845278 5108 scope.go:117] "RemoveContainer" containerID="642ee9c6d8e729c1462d0c8131f631802a34755fb66293268c620a1cd67c6176"
Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.867112 5108 scope.go:117] "RemoveContainer" containerID="a70dfe643b272d7f9dc01ec7b36f343f620526134afae5d30a766c5cf3270870"
Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.892476 5108 scope.go:117] "RemoveContainer" containerID="53e38fcca4e479daf86ba5adabe7a76543efdb7feffc17bde30b53d3dcd9c0f1"
Jan 04 00:16:38 crc kubenswrapper[5108]: I0104 00:16:38.906047 5108 scope.go:117] "RemoveContainer" containerID="049d38d5c84e461c95c0efcff72005df42fd1ac850c9ee1f26eadf0c2e7c6f7d"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.511432 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zbq58"]
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512603 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1aa34c52-ea52-42e1-a7b1-a6f22e32642b" containerName="registry-server"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512645 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa34c52-ea52-42e1-a7b1-a6f22e32642b" containerName="registry-server"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512666 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" containerName="marketplace-operator"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512681 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" containerName="marketplace-operator"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512699 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a762f8cf-a77d-477e-8141-1bb1e02d8744" containerName="extract-content"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512710 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="a762f8cf-a77d-477e-8141-1bb1e02d8744" containerName="extract-content"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512738 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a762f8cf-a77d-477e-8141-1bb1e02d8744" containerName="registry-server"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512750 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="a762f8cf-a77d-477e-8141-1bb1e02d8744" containerName="registry-server"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512769 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="320a6eb9-3704-43c9-84b9-25580545ff50" containerName="extract-utilities"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512781 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="320a6eb9-3704-43c9-84b9-25580545ff50" containerName="extract-utilities"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512797 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1aa34c52-ea52-42e1-a7b1-a6f22e32642b" containerName="extract-utilities"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512809 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa34c52-ea52-42e1-a7b1-a6f22e32642b" containerName="extract-utilities"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512826 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="59b92be9-237e-4252-9bbe-a71908afb6e9" containerName="extract-utilities"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512840 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b92be9-237e-4252-9bbe-a71908afb6e9" containerName="extract-utilities"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512862 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="59b92be9-237e-4252-9bbe-a71908afb6e9" containerName="registry-server"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512873 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b92be9-237e-4252-9bbe-a71908afb6e9" containerName="registry-server"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512892 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a762f8cf-a77d-477e-8141-1bb1e02d8744" containerName="extract-utilities"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512902 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="a762f8cf-a77d-477e-8141-1bb1e02d8744" containerName="extract-utilities"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512917 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="59b92be9-237e-4252-9bbe-a71908afb6e9" containerName="extract-content"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512931 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b92be9-237e-4252-9bbe-a71908afb6e9" containerName="extract-content"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512961 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1aa34c52-ea52-42e1-a7b1-a6f22e32642b" containerName="extract-content"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.512986 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa34c52-ea52-42e1-a7b1-a6f22e32642b" containerName="extract-content"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.513020 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="320a6eb9-3704-43c9-84b9-25580545ff50" containerName="extract-content"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.513038 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="320a6eb9-3704-43c9-84b9-25580545ff50" containerName="extract-content"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.513070 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="320a6eb9-3704-43c9-84b9-25580545ff50" containerName="registry-server"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.513083 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="320a6eb9-3704-43c9-84b9-25580545ff50" containerName="registry-server"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.513292 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" containerName="marketplace-operator"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.513318 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="59b92be9-237e-4252-9bbe-a71908afb6e9" containerName="registry-server"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.513335 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="a762f8cf-a77d-477e-8141-1bb1e02d8744" containerName="registry-server"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.513351 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" containerName="marketplace-operator"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.513366 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="320a6eb9-3704-43c9-84b9-25580545ff50" containerName="registry-server"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.513388 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="1aa34c52-ea52-42e1-a7b1-a6f22e32642b" containerName="registry-server"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.513594 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" containerName="marketplace-operator"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.513611 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" containerName="marketplace-operator"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.525579 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zbq58"]
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.525857 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zbq58"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.529752 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.655009 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5sjg\" (UniqueName: \"kubernetes.io/projected/3f0916ca-f3c6-4a23-add3-1dcede582a7e-kube-api-access-t5sjg\") pod \"redhat-marketplace-zbq58\" (UID: \"3f0916ca-f3c6-4a23-add3-1dcede582a7e\") " pod="openshift-marketplace/redhat-marketplace-zbq58"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.655093 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0916ca-f3c6-4a23-add3-1dcede582a7e-catalog-content\") pod \"redhat-marketplace-zbq58\" (UID: \"3f0916ca-f3c6-4a23-add3-1dcede582a7e\") " pod="openshift-marketplace/redhat-marketplace-zbq58"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.655336 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0916ca-f3c6-4a23-add3-1dcede582a7e-utilities\") pod \"redhat-marketplace-zbq58\" (UID: \"3f0916ca-f3c6-4a23-add3-1dcede582a7e\") " pod="openshift-marketplace/redhat-marketplace-zbq58"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.687774 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-qbs7x"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.698928 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w78tn"]
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.706498 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w78tn"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.706775 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w78tn"]
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.712575 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.756760 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0916ca-f3c6-4a23-add3-1dcede582a7e-catalog-content\") pod \"redhat-marketplace-zbq58\" (UID: \"3f0916ca-f3c6-4a23-add3-1dcede582a7e\") " pod="openshift-marketplace/redhat-marketplace-zbq58"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.756868 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0916ca-f3c6-4a23-add3-1dcede582a7e-utilities\") pod \"redhat-marketplace-zbq58\" (UID: \"3f0916ca-f3c6-4a23-add3-1dcede582a7e\") " pod="openshift-marketplace/redhat-marketplace-zbq58"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.756987 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t5sjg\" (UniqueName: \"kubernetes.io/projected/3f0916ca-f3c6-4a23-add3-1dcede582a7e-kube-api-access-t5sjg\") pod \"redhat-marketplace-zbq58\" (UID: \"3f0916ca-f3c6-4a23-add3-1dcede582a7e\") " pod="openshift-marketplace/redhat-marketplace-zbq58"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.757746 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0916ca-f3c6-4a23-add3-1dcede582a7e-utilities\") pod \"redhat-marketplace-zbq58\" (UID: \"3f0916ca-f3c6-4a23-add3-1dcede582a7e\") " pod="openshift-marketplace/redhat-marketplace-zbq58"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.757983 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0916ca-f3c6-4a23-add3-1dcede582a7e-catalog-content\") pod \"redhat-marketplace-zbq58\" (UID: \"3f0916ca-f3c6-4a23-add3-1dcede582a7e\") " pod="openshift-marketplace/redhat-marketplace-zbq58"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.785501 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5sjg\" (UniqueName: \"kubernetes.io/projected/3f0916ca-f3c6-4a23-add3-1dcede582a7e-kube-api-access-t5sjg\") pod \"redhat-marketplace-zbq58\" (UID: \"3f0916ca-f3c6-4a23-add3-1dcede582a7e\") " pod="openshift-marketplace/redhat-marketplace-zbq58"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.851603 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zbq58"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.858134 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3037a115-bdce-4e65-b199-0b4aef54946f-utilities\") pod \"community-operators-w78tn\" (UID: \"3037a115-bdce-4e65-b199-0b4aef54946f\") " pod="openshift-marketplace/community-operators-w78tn"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.858176 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsbbw\" (UniqueName: \"kubernetes.io/projected/3037a115-bdce-4e65-b199-0b4aef54946f-kube-api-access-gsbbw\") pod \"community-operators-w78tn\" (UID: \"3037a115-bdce-4e65-b199-0b4aef54946f\") " pod="openshift-marketplace/community-operators-w78tn"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.858258 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3037a115-bdce-4e65-b199-0b4aef54946f-catalog-content\") pod \"community-operators-w78tn\" (UID: \"3037a115-bdce-4e65-b199-0b4aef54946f\") " pod="openshift-marketplace/community-operators-w78tn"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.959840 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3037a115-bdce-4e65-b199-0b4aef54946f-utilities\") pod \"community-operators-w78tn\" (UID: \"3037a115-bdce-4e65-b199-0b4aef54946f\") " pod="openshift-marketplace/community-operators-w78tn"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.960240 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gsbbw\" (UniqueName: \"kubernetes.io/projected/3037a115-bdce-4e65-b199-0b4aef54946f-kube-api-access-gsbbw\") pod \"community-operators-w78tn\" (UID: \"3037a115-bdce-4e65-b199-0b4aef54946f\") " pod="openshift-marketplace/community-operators-w78tn"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.960333 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3037a115-bdce-4e65-b199-0b4aef54946f-catalog-content\") pod \"community-operators-w78tn\" (UID: \"3037a115-bdce-4e65-b199-0b4aef54946f\") " pod="openshift-marketplace/community-operators-w78tn"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.960954 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3037a115-bdce-4e65-b199-0b4aef54946f-catalog-content\") pod \"community-operators-w78tn\" (UID: \"3037a115-bdce-4e65-b199-0b4aef54946f\") " pod="openshift-marketplace/community-operators-w78tn"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.961295 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3037a115-bdce-4e65-b199-0b4aef54946f-utilities\") pod \"community-operators-w78tn\" (UID: \"3037a115-bdce-4e65-b199-0b4aef54946f\") " pod="openshift-marketplace/community-operators-w78tn"
Jan 04 00:16:39 crc kubenswrapper[5108]: I0104 00:16:39.983580 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsbbw\" (UniqueName: \"kubernetes.io/projected/3037a115-bdce-4e65-b199-0b4aef54946f-kube-api-access-gsbbw\") pod \"community-operators-w78tn\" (UID: \"3037a115-bdce-4e65-b199-0b4aef54946f\") " pod="openshift-marketplace/community-operators-w78tn"
Jan 04 00:16:40 crc kubenswrapper[5108]: I0104 00:16:40.027947 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w78tn"
Jan 04 00:16:40 crc kubenswrapper[5108]: I0104 00:16:40.319990 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zbq58"]
Jan 04 00:16:40 crc kubenswrapper[5108]: W0104 00:16:40.320729 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f0916ca_f3c6_4a23_add3_1dcede582a7e.slice/crio-9a299d62b9afbd3dc4919e677d215d30ab8f0e02cb33423c56fa133b3441cea8 WatchSource:0}: Error finding container 9a299d62b9afbd3dc4919e677d215d30ab8f0e02cb33423c56fa133b3441cea8: Status 404 returned error can't find the container with id 9a299d62b9afbd3dc4919e677d215d30ab8f0e02cb33423c56fa133b3441cea8
Jan 04 00:16:40 crc kubenswrapper[5108]: W0104 00:16:40.462439 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3037a115_bdce_4e65_b199_0b4aef54946f.slice/crio-3b5cdfffdb19adbd6e5af67b1b710c8d78d74c75ddb608cc7e9101bf0cb7bf76 WatchSource:0}: Error finding container 3b5cdfffdb19adbd6e5af67b1b710c8d78d74c75ddb608cc7e9101bf0cb7bf76: Status 404 returned error can't find the container with id 3b5cdfffdb19adbd6e5af67b1b710c8d78d74c75ddb608cc7e9101bf0cb7bf76
Jan 04 00:16:40 crc kubenswrapper[5108]: I0104 00:16:40.464678 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aa34c52-ea52-42e1-a7b1-a6f22e32642b" path="/var/lib/kubelet/pods/1aa34c52-ea52-42e1-a7b1-a6f22e32642b/volumes"
Jan 04 00:16:40 crc kubenswrapper[5108]: I0104 00:16:40.466373 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="320a6eb9-3704-43c9-84b9-25580545ff50" path="/var/lib/kubelet/pods/320a6eb9-3704-43c9-84b9-25580545ff50/volumes"
Jan 04 00:16:40 crc kubenswrapper[5108]: I0104 00:16:40.467661 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59b92be9-237e-4252-9bbe-a71908afb6e9" path="/var/lib/kubelet/pods/59b92be9-237e-4252-9bbe-a71908afb6e9/volumes"
Jan 04 00:16:40 crc kubenswrapper[5108]: I0104 00:16:40.470493 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a762f8cf-a77d-477e-8141-1bb1e02d8744" path="/var/lib/kubelet/pods/a762f8cf-a77d-477e-8141-1bb1e02d8744/volumes"
Jan 04 00:16:40 crc kubenswrapper[5108]: I0104 00:16:40.472722 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4e24d8d-dee7-4fe9-a832-8ff4983abbb0" path="/var/lib/kubelet/pods/e4e24d8d-dee7-4fe9-a832-8ff4983abbb0/volumes"
Jan 04 00:16:40 crc kubenswrapper[5108]: I0104 00:16:40.473619 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w78tn"]
Jan 04 00:16:40 crc kubenswrapper[5108]: I0104 00:16:40.688841 5108 generic.go:358] "Generic (PLEG): container finished" podID="3037a115-bdce-4e65-b199-0b4aef54946f" containerID="e285cfe30773dd483723f71a667f324341223acf9360b3a56d8dd4996313a7f4" exitCode=0
Jan 04 00:16:40 crc kubenswrapper[5108]: I0104 00:16:40.689010 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w78tn" event={"ID":"3037a115-bdce-4e65-b199-0b4aef54946f","Type":"ContainerDied","Data":"e285cfe30773dd483723f71a667f324341223acf9360b3a56d8dd4996313a7f4"}
Jan 04 00:16:40 crc kubenswrapper[5108]: I0104 00:16:40.689050 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w78tn" event={"ID":"3037a115-bdce-4e65-b199-0b4aef54946f","Type":"ContainerStarted","Data":"3b5cdfffdb19adbd6e5af67b1b710c8d78d74c75ddb608cc7e9101bf0cb7bf76"}
Jan 04 00:16:40 crc kubenswrapper[5108]: I0104 00:16:40.698697 5108 generic.go:358] "Generic (PLEG): container finished" podID="3f0916ca-f3c6-4a23-add3-1dcede582a7e" containerID="36fb5f19075f4416c0a5b5851c29a86fff076a68235cffd7931476441ce5a824" exitCode=0
Jan 04 00:16:40 crc kubenswrapper[5108]: I0104 00:16:40.700308 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zbq58" event={"ID":"3f0916ca-f3c6-4a23-add3-1dcede582a7e","Type":"ContainerDied","Data":"36fb5f19075f4416c0a5b5851c29a86fff076a68235cffd7931476441ce5a824"}
Jan 04 00:16:40 crc kubenswrapper[5108]: I0104 00:16:40.700342 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zbq58" event={"ID":"3f0916ca-f3c6-4a23-add3-1dcede582a7e","Type":"ContainerStarted","Data":"9a299d62b9afbd3dc4919e677d215d30ab8f0e02cb33423c56fa133b3441cea8"}
Jan 04 00:16:41 crc kubenswrapper[5108]: I0104 00:16:41.710122 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w78tn" event={"ID":"3037a115-bdce-4e65-b199-0b4aef54946f","Type":"ContainerStarted","Data":"98b8c413be153913d09e8b3cdc5c2e79c6ab3f0bb5a9f86cdbe48348be86399a"}
Jan 04 00:16:41 crc kubenswrapper[5108]: I0104 00:16:41.712875 5108 generic.go:358] "Generic (PLEG): container finished" podID="3f0916ca-f3c6-4a23-add3-1dcede582a7e" containerID="ca99446bc276f4e6fd74cce2929d595175cab5572da3c8399a166f7b370bdb01" exitCode=0
Jan 04 00:16:41 crc kubenswrapper[5108]: I0104 00:16:41.713109 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zbq58" event={"ID":"3f0916ca-f3c6-4a23-add3-1dcede582a7e","Type":"ContainerDied","Data":"ca99446bc276f4e6fd74cce2929d595175cab5572da3c8399a166f7b370bdb01"}
Jan 04 00:16:41 crc kubenswrapper[5108]: I0104 00:16:41.901441 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kkx7t"]
Jan 04 00:16:41 crc kubenswrapper[5108]: I0104 00:16:41.908656 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kkx7t"
Jan 04 00:16:41 crc kubenswrapper[5108]: I0104 00:16:41.910246 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kkx7t"]
Jan 04 00:16:41 crc kubenswrapper[5108]: I0104 00:16:41.912934 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 04 00:16:41 crc kubenswrapper[5108]: I0104 00:16:41.999549 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7aa0e7e-b827-48db-b42e-bd862f760149-catalog-content\") pod \"certified-operators-kkx7t\" (UID: \"d7aa0e7e-b827-48db-b42e-bd862f760149\") " pod="openshift-marketplace/certified-operators-kkx7t"
Jan 04 00:16:41 crc kubenswrapper[5108]: I0104 00:16:41.999623 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7aa0e7e-b827-48db-b42e-bd862f760149-utilities\") pod \"certified-operators-kkx7t\" (UID: \"d7aa0e7e-b827-48db-b42e-bd862f760149\") " pod="openshift-marketplace/certified-operators-kkx7t"
Jan 04 00:16:41 crc kubenswrapper[5108]: I0104 00:16:41.999661 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn957\" (UniqueName: \"kubernetes.io/projected/d7aa0e7e-b827-48db-b42e-bd862f760149-kube-api-access-sn957\") pod \"certified-operators-kkx7t\" (UID: \"d7aa0e7e-b827-48db-b42e-bd862f760149\") " pod="openshift-marketplace/certified-operators-kkx7t"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.095024 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-42qln"]
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.100794 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7aa0e7e-b827-48db-b42e-bd862f760149-catalog-content\") pod \"certified-operators-kkx7t\" (UID: \"d7aa0e7e-b827-48db-b42e-bd862f760149\") " pod="openshift-marketplace/certified-operators-kkx7t"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.100863 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7aa0e7e-b827-48db-b42e-bd862f760149-utilities\") pod \"certified-operators-kkx7t\" (UID: \"d7aa0e7e-b827-48db-b42e-bd862f760149\") " pod="openshift-marketplace/certified-operators-kkx7t"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.100903 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sn957\" (UniqueName: \"kubernetes.io/projected/d7aa0e7e-b827-48db-b42e-bd862f760149-kube-api-access-sn957\") pod \"certified-operators-kkx7t\" (UID: \"d7aa0e7e-b827-48db-b42e-bd862f760149\") " pod="openshift-marketplace/certified-operators-kkx7t"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.101465 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7aa0e7e-b827-48db-b42e-bd862f760149-catalog-content\") pod \"certified-operators-kkx7t\" (UID: \"d7aa0e7e-b827-48db-b42e-bd862f760149\") " pod="openshift-marketplace/certified-operators-kkx7t"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.101716 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7aa0e7e-b827-48db-b42e-bd862f760149-utilities\") pod \"certified-operators-kkx7t\" (UID: \"d7aa0e7e-b827-48db-b42e-bd862f760149\") " pod="openshift-marketplace/certified-operators-kkx7t"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.106851 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-42qln"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.110925 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.116712 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-42qln"]
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.133877 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn957\" (UniqueName: \"kubernetes.io/projected/d7aa0e7e-b827-48db-b42e-bd862f760149-kube-api-access-sn957\") pod \"certified-operators-kkx7t\" (UID: \"d7aa0e7e-b827-48db-b42e-bd862f760149\") " pod="openshift-marketplace/certified-operators-kkx7t"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.201628 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fe17f4-5665-44f0-b006-7082ad6b29e7-utilities\") pod \"redhat-operators-42qln\" (UID: \"62fe17f4-5665-44f0-b006-7082ad6b29e7\") " pod="openshift-marketplace/redhat-operators-42qln"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.201673 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fe17f4-5665-44f0-b006-7082ad6b29e7-catalog-content\") pod \"redhat-operators-42qln\" (UID: \"62fe17f4-5665-44f0-b006-7082ad6b29e7\") " pod="openshift-marketplace/redhat-operators-42qln"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.201694 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hkm8\" (UniqueName: \"kubernetes.io/projected/62fe17f4-5665-44f0-b006-7082ad6b29e7-kube-api-access-8hkm8\") pod \"redhat-operators-42qln\" (UID: \"62fe17f4-5665-44f0-b006-7082ad6b29e7\") " pod="openshift-marketplace/redhat-operators-42qln"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.216726 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"]
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.224577 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.228564 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kkx7t"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.238802 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"]
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.303673 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6295587b-b532-4617-8753-39d7cac47227-registry-tls\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.303744 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2lr9\" (UniqueName: \"kubernetes.io/projected/6295587b-b532-4617-8753-39d7cac47227-kube-api-access-b2lr9\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.303784 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fe17f4-5665-44f0-b006-7082ad6b29e7-utilities\") pod \"redhat-operators-42qln\" (UID: \"62fe17f4-5665-44f0-b006-7082ad6b29e7\") " pod="openshift-marketplace/redhat-operators-42qln"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.303804 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fe17f4-5665-44f0-b006-7082ad6b29e7-catalog-content\") pod \"redhat-operators-42qln\" (UID: \"62fe17f4-5665-44f0-b006-7082ad6b29e7\") " pod="openshift-marketplace/redhat-operators-42qln"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.303819 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8hkm8\" (UniqueName: \"kubernetes.io/projected/62fe17f4-5665-44f0-b006-7082ad6b29e7-kube-api-access-8hkm8\") pod \"redhat-operators-42qln\" (UID: \"62fe17f4-5665-44f0-b006-7082ad6b29e7\") " pod="openshift-marketplace/redhat-operators-42qln"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.303847 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6295587b-b532-4617-8753-39d7cac47227-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.303879 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6295587b-b532-4617-8753-39d7cac47227-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.303918 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6295587b-b532-4617-8753-39d7cac47227-registry-certificates\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.303933 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6295587b-b532-4617-8753-39d7cac47227-bound-sa-token\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.303957 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.303976 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6295587b-b532-4617-8753-39d7cac47227-trusted-ca\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.304522 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fe17f4-5665-44f0-b006-7082ad6b29e7-utilities\") pod \"redhat-operators-42qln\" (UID: \"62fe17f4-5665-44f0-b006-7082ad6b29e7\") " pod="openshift-marketplace/redhat-operators-42qln"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.304706 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fe17f4-5665-44f0-b006-7082ad6b29e7-catalog-content\") pod \"redhat-operators-42qln\" (UID: \"62fe17f4-5665-44f0-b006-7082ad6b29e7\") " pod="openshift-marketplace/redhat-operators-42qln"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.329132 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hkm8\" (UniqueName: \"kubernetes.io/projected/62fe17f4-5665-44f0-b006-7082ad6b29e7-kube-api-access-8hkm8\") pod \"redhat-operators-42qln\" (UID: \"62fe17f4-5665-44f0-b006-7082ad6b29e7\") " pod="openshift-marketplace/redhat-operators-42qln"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.333221 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.405009 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6295587b-b532-4617-8753-39d7cac47227-registry-certificates\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.405074 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6295587b-b532-4617-8753-39d7cac47227-bound-sa-token\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.405110 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6295587b-b532-4617-8753-39d7cac47227-trusted-ca\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.405143 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6295587b-b532-4617-8753-39d7cac47227-registry-tls\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.405179 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b2lr9\" (UniqueName: \"kubernetes.io/projected/6295587b-b532-4617-8753-39d7cac47227-kube-api-access-b2lr9\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.405225 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6295587b-b532-4617-8753-39d7cac47227-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.405260 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6295587b-b532-4617-8753-39d7cac47227-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.406982 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6295587b-b532-4617-8753-39d7cac47227-registry-certificates\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.408071 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6295587b-b532-4617-8753-39d7cac47227-trusted-ca\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.411043 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6295587b-b532-4617-8753-39d7cac47227-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.414192 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6295587b-b532-4617-8753-39d7cac47227-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"
Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.420555 5108 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-42qln" Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.423744 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6295587b-b532-4617-8753-39d7cac47227-registry-tls\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5" Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.425675 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6295587b-b532-4617-8753-39d7cac47227-bound-sa-token\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5" Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.430229 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2lr9\" (UniqueName: \"kubernetes.io/projected/6295587b-b532-4617-8753-39d7cac47227-kube-api-access-b2lr9\") pod \"image-registry-5d9d95bf5b-jnvc5\" (UID: \"6295587b-b532-4617-8753-39d7cac47227\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5" Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.576414 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5" Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.662640 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kkx7t"] Jan 04 00:16:42 crc kubenswrapper[5108]: W0104 00:16:42.694655 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7aa0e7e_b827_48db_b42e_bd862f760149.slice/crio-03250a2214d7fe38a56293ba98b4550110a9d08379e08b84e64d832ae26fdf69 WatchSource:0}: Error finding container 03250a2214d7fe38a56293ba98b4550110a9d08379e08b84e64d832ae26fdf69: Status 404 returned error can't find the container with id 03250a2214d7fe38a56293ba98b4550110a9d08379e08b84e64d832ae26fdf69 Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.725564 5108 generic.go:358] "Generic (PLEG): container finished" podID="3037a115-bdce-4e65-b199-0b4aef54946f" containerID="98b8c413be153913d09e8b3cdc5c2e79c6ab3f0bb5a9f86cdbe48348be86399a" exitCode=0 Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.725654 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w78tn" event={"ID":"3037a115-bdce-4e65-b199-0b4aef54946f","Type":"ContainerDied","Data":"98b8c413be153913d09e8b3cdc5c2e79c6ab3f0bb5a9f86cdbe48348be86399a"} Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.727112 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w78tn" event={"ID":"3037a115-bdce-4e65-b199-0b4aef54946f","Type":"ContainerStarted","Data":"9814fc22f8fcd62e23dead0476ec375669c8c2d73df20100a5d0733a247fbdef"} Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.730338 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkx7t" 
event={"ID":"d7aa0e7e-b827-48db-b42e-bd862f760149","Type":"ContainerStarted","Data":"03250a2214d7fe38a56293ba98b4550110a9d08379e08b84e64d832ae26fdf69"} Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.738591 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zbq58" event={"ID":"3f0916ca-f3c6-4a23-add3-1dcede582a7e","Type":"ContainerStarted","Data":"7c6f90b1e08d1b9dc634e1005a66a89e3ecd98de1364c1e3164e46ae49ed64a0"} Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.784755 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w78tn" podStartSLOduration=3.197675609 podStartE2EDuration="3.784730455s" podCreationTimestamp="2026-01-04 00:16:39 +0000 UTC" firstStartedPulling="2026-01-04 00:16:40.689954363 +0000 UTC m=+374.678519449" lastFinishedPulling="2026-01-04 00:16:41.277009199 +0000 UTC m=+375.265574295" observedRunningTime="2026-01-04 00:16:42.778591019 +0000 UTC m=+376.767156115" watchObservedRunningTime="2026-01-04 00:16:42.784730455 +0000 UTC m=+376.773295541" Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.801837 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zbq58" podStartSLOduration=3.17065656 podStartE2EDuration="3.801819958s" podCreationTimestamp="2026-01-04 00:16:39 +0000 UTC" firstStartedPulling="2026-01-04 00:16:40.700147879 +0000 UTC m=+374.688712965" lastFinishedPulling="2026-01-04 00:16:41.331311277 +0000 UTC m=+375.319876363" observedRunningTime="2026-01-04 00:16:42.799997258 +0000 UTC m=+376.788562344" watchObservedRunningTime="2026-01-04 00:16:42.801819958 +0000 UTC m=+376.790385044" Jan 04 00:16:42 crc kubenswrapper[5108]: I0104 00:16:42.879697 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-42qln"] Jan 04 00:16:42 crc kubenswrapper[5108]: W0104 00:16:42.888902 5108 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62fe17f4_5665_44f0_b006_7082ad6b29e7.slice/crio-46f7d5801d92715947ab2622a047cf0e58185c31e14e42e37b214f147fcedb03 WatchSource:0}: Error finding container 46f7d5801d92715947ab2622a047cf0e58185c31e14e42e37b214f147fcedb03: Status 404 returned error can't find the container with id 46f7d5801d92715947ab2622a047cf0e58185c31e14e42e37b214f147fcedb03 Jan 04 00:16:43 crc kubenswrapper[5108]: I0104 00:16:43.051023 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-jnvc5"] Jan 04 00:16:43 crc kubenswrapper[5108]: W0104 00:16:43.056066 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6295587b_b532_4617_8753_39d7cac47227.slice/crio-8f488cf47911624b2602307a803e61bfa6c0bb17b90db76b3695e13cb3c47c3d WatchSource:0}: Error finding container 8f488cf47911624b2602307a803e61bfa6c0bb17b90db76b3695e13cb3c47c3d: Status 404 returned error can't find the container with id 8f488cf47911624b2602307a803e61bfa6c0bb17b90db76b3695e13cb3c47c3d Jan 04 00:16:43 crc kubenswrapper[5108]: I0104 00:16:43.746749 5108 generic.go:358] "Generic (PLEG): container finished" podID="d7aa0e7e-b827-48db-b42e-bd862f760149" containerID="eb2aa845593e63c9bf2880ef055f59d8d450452cc1b6e0205c182f240778abfd" exitCode=0 Jan 04 00:16:43 crc kubenswrapper[5108]: I0104 00:16:43.746843 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkx7t" event={"ID":"d7aa0e7e-b827-48db-b42e-bd862f760149","Type":"ContainerDied","Data":"eb2aa845593e63c9bf2880ef055f59d8d450452cc1b6e0205c182f240778abfd"} Jan 04 00:16:43 crc kubenswrapper[5108]: I0104 00:16:43.749912 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5" 
event={"ID":"6295587b-b532-4617-8753-39d7cac47227","Type":"ContainerStarted","Data":"4125a2741fbd33135ccaa28a94612fd26421ebe2260fb831139e6d8a3882386b"} Jan 04 00:16:43 crc kubenswrapper[5108]: I0104 00:16:43.749975 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5" event={"ID":"6295587b-b532-4617-8753-39d7cac47227","Type":"ContainerStarted","Data":"8f488cf47911624b2602307a803e61bfa6c0bb17b90db76b3695e13cb3c47c3d"} Jan 04 00:16:43 crc kubenswrapper[5108]: I0104 00:16:43.750013 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5" Jan 04 00:16:43 crc kubenswrapper[5108]: I0104 00:16:43.753717 5108 generic.go:358] "Generic (PLEG): container finished" podID="62fe17f4-5665-44f0-b006-7082ad6b29e7" containerID="0e757b7da07ec2bb5f9e36c2c378c6c0db61fdd02257e64adae2ae3bdea15e2e" exitCode=0 Jan 04 00:16:43 crc kubenswrapper[5108]: I0104 00:16:43.754177 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42qln" event={"ID":"62fe17f4-5665-44f0-b006-7082ad6b29e7","Type":"ContainerDied","Data":"0e757b7da07ec2bb5f9e36c2c378c6c0db61fdd02257e64adae2ae3bdea15e2e"} Jan 04 00:16:43 crc kubenswrapper[5108]: I0104 00:16:43.754246 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42qln" event={"ID":"62fe17f4-5665-44f0-b006-7082ad6b29e7","Type":"ContainerStarted","Data":"46f7d5801d92715947ab2622a047cf0e58185c31e14e42e37b214f147fcedb03"} Jan 04 00:16:43 crc kubenswrapper[5108]: I0104 00:16:43.843660 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5" podStartSLOduration=1.8436349079999999 podStartE2EDuration="1.843634908s" podCreationTimestamp="2026-01-04 00:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:16:43.838070657 +0000 UTC m=+377.826635743" watchObservedRunningTime="2026-01-04 00:16:43.843634908 +0000 UTC m=+377.832199994" Jan 04 00:16:44 crc kubenswrapper[5108]: I0104 00:16:44.762128 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42qln" event={"ID":"62fe17f4-5665-44f0-b006-7082ad6b29e7","Type":"ContainerStarted","Data":"5539c9327a66315b61e9c1a81758c6ae1f0c8a2e6934b3f40b0ba3fe4280a74e"} Jan 04 00:16:44 crc kubenswrapper[5108]: I0104 00:16:44.764943 5108 generic.go:358] "Generic (PLEG): container finished" podID="d7aa0e7e-b827-48db-b42e-bd862f760149" containerID="fefaec6aab221b084dd1a286b41cffe0f2c39e702f2bec3ad0099690034c66dc" exitCode=0 Jan 04 00:16:44 crc kubenswrapper[5108]: I0104 00:16:44.766256 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkx7t" event={"ID":"d7aa0e7e-b827-48db-b42e-bd862f760149","Type":"ContainerDied","Data":"fefaec6aab221b084dd1a286b41cffe0f2c39e702f2bec3ad0099690034c66dc"} Jan 04 00:16:45 crc kubenswrapper[5108]: I0104 00:16:45.773179 5108 generic.go:358] "Generic (PLEG): container finished" podID="62fe17f4-5665-44f0-b006-7082ad6b29e7" containerID="5539c9327a66315b61e9c1a81758c6ae1f0c8a2e6934b3f40b0ba3fe4280a74e" exitCode=0 Jan 04 00:16:45 crc kubenswrapper[5108]: I0104 00:16:45.773292 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42qln" event={"ID":"62fe17f4-5665-44f0-b006-7082ad6b29e7","Type":"ContainerDied","Data":"5539c9327a66315b61e9c1a81758c6ae1f0c8a2e6934b3f40b0ba3fe4280a74e"} Jan 04 00:16:45 crc kubenswrapper[5108]: I0104 00:16:45.776788 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkx7t" event={"ID":"d7aa0e7e-b827-48db-b42e-bd862f760149","Type":"ContainerStarted","Data":"f1299f7580487d1d3ab7c70058c259d96db44a2d8a2b68e496ec4391dde6cef9"} Jan 04 00:16:45 crc 
kubenswrapper[5108]: I0104 00:16:45.819641 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kkx7t" podStartSLOduration=4.219046296 podStartE2EDuration="4.819616415s" podCreationTimestamp="2026-01-04 00:16:41 +0000 UTC" firstStartedPulling="2026-01-04 00:16:43.748120294 +0000 UTC m=+377.736685380" lastFinishedPulling="2026-01-04 00:16:44.348690403 +0000 UTC m=+378.337255499" observedRunningTime="2026-01-04 00:16:45.817618862 +0000 UTC m=+379.806183948" watchObservedRunningTime="2026-01-04 00:16:45.819616415 +0000 UTC m=+379.808181501" Jan 04 00:16:46 crc kubenswrapper[5108]: I0104 00:16:46.788926 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42qln" event={"ID":"62fe17f4-5665-44f0-b006-7082ad6b29e7","Type":"ContainerStarted","Data":"d8571e66b4755fdda5c95da1bb2c94daf2d13bdc354229f6d62804a60e902283"} Jan 04 00:16:46 crc kubenswrapper[5108]: I0104 00:16:46.811076 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-42qln" podStartSLOduration=4.154043675 podStartE2EDuration="4.811048302s" podCreationTimestamp="2026-01-04 00:16:42 +0000 UTC" firstStartedPulling="2026-01-04 00:16:43.755481213 +0000 UTC m=+377.744046309" lastFinishedPulling="2026-01-04 00:16:44.41248585 +0000 UTC m=+378.401050936" observedRunningTime="2026-01-04 00:16:46.806052537 +0000 UTC m=+380.794617633" watchObservedRunningTime="2026-01-04 00:16:46.811048302 +0000 UTC m=+380.799613398" Jan 04 00:16:49 crc kubenswrapper[5108]: I0104 00:16:49.852942 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zbq58" Jan 04 00:16:49 crc kubenswrapper[5108]: I0104 00:16:49.855015 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-zbq58" Jan 04 00:16:49 crc kubenswrapper[5108]: I0104 
00:16:49.904822 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zbq58" Jan 04 00:16:50 crc kubenswrapper[5108]: I0104 00:16:50.028802 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w78tn" Jan 04 00:16:50 crc kubenswrapper[5108]: I0104 00:16:50.028919 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-w78tn" Jan 04 00:16:50 crc kubenswrapper[5108]: I0104 00:16:50.072495 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w78tn" Jan 04 00:16:50 crc kubenswrapper[5108]: I0104 00:16:50.860621 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zbq58" Jan 04 00:16:50 crc kubenswrapper[5108]: I0104 00:16:50.861052 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w78tn" Jan 04 00:16:52 crc kubenswrapper[5108]: I0104 00:16:52.229438 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-kkx7t" Jan 04 00:16:52 crc kubenswrapper[5108]: I0104 00:16:52.231874 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kkx7t" Jan 04 00:16:52 crc kubenswrapper[5108]: I0104 00:16:52.287987 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kkx7t" Jan 04 00:16:52 crc kubenswrapper[5108]: I0104 00:16:52.422413 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-42qln" Jan 04 00:16:52 crc kubenswrapper[5108]: I0104 00:16:52.422478 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-marketplace/redhat-operators-42qln" Jan 04 00:16:52 crc kubenswrapper[5108]: I0104 00:16:52.470737 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-42qln" Jan 04 00:16:52 crc kubenswrapper[5108]: I0104 00:16:52.871995 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-42qln" Jan 04 00:16:52 crc kubenswrapper[5108]: I0104 00:16:52.876959 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kkx7t" Jan 04 00:17:04 crc kubenswrapper[5108]: I0104 00:17:04.771255 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-jnvc5" Jan 04 00:17:04 crc kubenswrapper[5108]: I0104 00:17:04.833377 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nbqsh"] Jan 04 00:17:29 crc kubenswrapper[5108]: I0104 00:17:29.872764 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" podUID="7c39a999-644f-43cd-b7e6-c7fd14281924" containerName="registry" containerID="cri-o://27fc4746e1db1913a84903d9ef912507087ecce90b8ad55ea0c4ddb5efbfe999" gracePeriod=30 Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.105534 5108 generic.go:358] "Generic (PLEG): container finished" podID="7c39a999-644f-43cd-b7e6-c7fd14281924" containerID="27fc4746e1db1913a84903d9ef912507087ecce90b8ad55ea0c4ddb5efbfe999" exitCode=0 Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.105692 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" event={"ID":"7c39a999-644f-43cd-b7e6-c7fd14281924","Type":"ContainerDied","Data":"27fc4746e1db1913a84903d9ef912507087ecce90b8ad55ea0c4ddb5efbfe999"} Jan 04 00:17:30 crc 
kubenswrapper[5108]: I0104 00:17:30.328045 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.489527 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c39a999-644f-43cd-b7e6-c7fd14281924-trusted-ca\") pod \"7c39a999-644f-43cd-b7e6-c7fd14281924\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.489623 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-registry-tls\") pod \"7c39a999-644f-43cd-b7e6-c7fd14281924\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.489708 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7c39a999-644f-43cd-b7e6-c7fd14281924-ca-trust-extracted\") pod \"7c39a999-644f-43cd-b7e6-c7fd14281924\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.489779 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7c39a999-644f-43cd-b7e6-c7fd14281924-registry-certificates\") pod \"7c39a999-644f-43cd-b7e6-c7fd14281924\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.490074 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"7c39a999-644f-43cd-b7e6-c7fd14281924\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " Jan 04 
00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.490174 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7c39a999-644f-43cd-b7e6-c7fd14281924-installation-pull-secrets\") pod \"7c39a999-644f-43cd-b7e6-c7fd14281924\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.490302 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtqq5\" (UniqueName: \"kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-kube-api-access-wtqq5\") pod \"7c39a999-644f-43cd-b7e6-c7fd14281924\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.490341 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-bound-sa-token\") pod \"7c39a999-644f-43cd-b7e6-c7fd14281924\" (UID: \"7c39a999-644f-43cd-b7e6-c7fd14281924\") " Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.492716 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c39a999-644f-43cd-b7e6-c7fd14281924-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7c39a999-644f-43cd-b7e6-c7fd14281924" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.493084 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c39a999-644f-43cd-b7e6-c7fd14281924-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7c39a999-644f-43cd-b7e6-c7fd14281924" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.500699 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7c39a999-644f-43cd-b7e6-c7fd14281924" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.501826 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c39a999-644f-43cd-b7e6-c7fd14281924-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7c39a999-644f-43cd-b7e6-c7fd14281924" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.502760 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7c39a999-644f-43cd-b7e6-c7fd14281924" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.503829 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-kube-api-access-wtqq5" (OuterVolumeSpecName: "kube-api-access-wtqq5") pod "7c39a999-644f-43cd-b7e6-c7fd14281924" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924"). InnerVolumeSpecName "kube-api-access-wtqq5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.514659 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "7c39a999-644f-43cd-b7e6-c7fd14281924" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.522366 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c39a999-644f-43cd-b7e6-c7fd14281924-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7c39a999-644f-43cd-b7e6-c7fd14281924" (UID: "7c39a999-644f-43cd-b7e6-c7fd14281924"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.592734 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wtqq5\" (UniqueName: \"kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-kube-api-access-wtqq5\") on node \"crc\" DevicePath \"\"" Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.592789 5108 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.592804 5108 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c39a999-644f-43cd-b7e6-c7fd14281924-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.592820 5108 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/7c39a999-644f-43cd-b7e6-c7fd14281924-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.592832 5108 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7c39a999-644f-43cd-b7e6-c7fd14281924-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.592844 5108 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7c39a999-644f-43cd-b7e6-c7fd14281924-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 04 00:17:30 crc kubenswrapper[5108]: I0104 00:17:30.592858 5108 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7c39a999-644f-43cd-b7e6-c7fd14281924-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 04 00:17:31 crc kubenswrapper[5108]: I0104 00:17:31.114618 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-nbqsh"
Jan 04 00:17:31 crc kubenswrapper[5108]: I0104 00:17:31.114633 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" event={"ID":"7c39a999-644f-43cd-b7e6-c7fd14281924","Type":"ContainerDied","Data":"9abcd8ed4c62866af4e464ee1f9e0ba733b5018b9858170b273192eeea514bfd"}
Jan 04 00:17:31 crc kubenswrapper[5108]: I0104 00:17:31.114767 5108 scope.go:117] "RemoveContainer" containerID="27fc4746e1db1913a84903d9ef912507087ecce90b8ad55ea0c4ddb5efbfe999"
Jan 04 00:17:31 crc kubenswrapper[5108]: I0104 00:17:31.153646 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nbqsh"]
Jan 04 00:17:31 crc kubenswrapper[5108]: I0104 00:17:31.159085 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nbqsh"]
Jan 04 00:17:32 crc kubenswrapper[5108]: I0104 00:17:32.459931 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c39a999-644f-43cd-b7e6-c7fd14281924" path="/var/lib/kubelet/pods/7c39a999-644f-43cd-b7e6-c7fd14281924/volumes"
Jan 04 00:17:35 crc kubenswrapper[5108]: I0104 00:17:35.139352 5108 patch_prober.go:28] interesting pod/image-registry-66587d64c8-nbqsh container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.23:5000/healthz\": context deadline exceeded" start-of-body=
Jan 04 00:17:35 crc kubenswrapper[5108]: I0104 00:17:35.139461 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66587d64c8-nbqsh" podUID="7c39a999-644f-43cd-b7e6-c7fd14281924" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.23:5000/healthz\": context deadline exceeded"
Jan 04 00:17:54 crc kubenswrapper[5108]: I0104 00:17:54.917063 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 04 00:17:54 crc kubenswrapper[5108]: I0104 00:17:54.918825 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 04 00:18:00 crc kubenswrapper[5108]: I0104 00:18:00.140740 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29458098-5vhdx"]
Jan 04 00:18:00 crc kubenswrapper[5108]: I0104 00:18:00.143171 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c39a999-644f-43cd-b7e6-c7fd14281924" containerName="registry"
Jan 04 00:18:00 crc kubenswrapper[5108]: I0104 00:18:00.143190 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c39a999-644f-43cd-b7e6-c7fd14281924" containerName="registry"
Jan 04 00:18:00 crc kubenswrapper[5108]: I0104 00:18:00.143332 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="7c39a999-644f-43cd-b7e6-c7fd14281924" containerName="registry"
Jan 04 00:18:00 crc kubenswrapper[5108]: I0104 00:18:00.148426 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458098-5vhdx"
Jan 04 00:18:00 crc kubenswrapper[5108]: I0104 00:18:00.151523 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-s7k94\""
Jan 04 00:18:00 crc kubenswrapper[5108]: I0104 00:18:00.152893 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 04 00:18:00 crc kubenswrapper[5108]: I0104 00:18:00.154258 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458098-5vhdx"]
Jan 04 00:18:00 crc kubenswrapper[5108]: I0104 00:18:00.155193 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 04 00:18:00 crc kubenswrapper[5108]: I0104 00:18:00.196140 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-968gx\" (UniqueName: \"kubernetes.io/projected/059ddc1f-fb99-4798-a5cf-c91d217c2763-kube-api-access-968gx\") pod \"auto-csr-approver-29458098-5vhdx\" (UID: \"059ddc1f-fb99-4798-a5cf-c91d217c2763\") " pod="openshift-infra/auto-csr-approver-29458098-5vhdx"
Jan 04 00:18:00 crc kubenswrapper[5108]: I0104 00:18:00.297985 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-968gx\" (UniqueName: \"kubernetes.io/projected/059ddc1f-fb99-4798-a5cf-c91d217c2763-kube-api-access-968gx\") pod \"auto-csr-approver-29458098-5vhdx\" (UID: \"059ddc1f-fb99-4798-a5cf-c91d217c2763\") " pod="openshift-infra/auto-csr-approver-29458098-5vhdx"
Jan 04 00:18:00 crc kubenswrapper[5108]: I0104 00:18:00.340168 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-968gx\" (UniqueName: \"kubernetes.io/projected/059ddc1f-fb99-4798-a5cf-c91d217c2763-kube-api-access-968gx\") pod \"auto-csr-approver-29458098-5vhdx\" (UID: \"059ddc1f-fb99-4798-a5cf-c91d217c2763\") " pod="openshift-infra/auto-csr-approver-29458098-5vhdx"
Jan 04 00:18:00 crc kubenswrapper[5108]: I0104 00:18:00.476253 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458098-5vhdx"
Jan 04 00:18:00 crc kubenswrapper[5108]: I0104 00:18:00.705124 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458098-5vhdx"]
Jan 04 00:18:01 crc kubenswrapper[5108]: I0104 00:18:01.361035 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458098-5vhdx" event={"ID":"059ddc1f-fb99-4798-a5cf-c91d217c2763","Type":"ContainerStarted","Data":"67964c4b3fc5b68bfa88426ebc77b98d53951b9583bb9a05d67abab828c58833"}
Jan 04 00:18:05 crc kubenswrapper[5108]: I0104 00:18:05.182748 5108 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-wtl9b"
Jan 04 00:18:05 crc kubenswrapper[5108]: I0104 00:18:05.207174 5108 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-wtl9b"
Jan 04 00:18:05 crc kubenswrapper[5108]: I0104 00:18:05.395690 5108 generic.go:358] "Generic (PLEG): container finished" podID="059ddc1f-fb99-4798-a5cf-c91d217c2763" containerID="f9066031eef2a52fcb566a68d4929f8db3dde5dfa8247dae8078a6e3831d64ed" exitCode=0
Jan 04 00:18:05 crc kubenswrapper[5108]: I0104 00:18:05.395754 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458098-5vhdx" event={"ID":"059ddc1f-fb99-4798-a5cf-c91d217c2763","Type":"ContainerDied","Data":"f9066031eef2a52fcb566a68d4929f8db3dde5dfa8247dae8078a6e3831d64ed"}
Jan 04 00:18:06 crc kubenswrapper[5108]: I0104 00:18:06.209639 5108 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-03 00:13:05 +0000 UTC" deadline="2026-01-30 07:04:01.863219962 +0000 UTC"
Jan 04 00:18:06 crc kubenswrapper[5108]: I0104 00:18:06.209711 5108 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="630h45m55.653513409s"
Jan 04 00:18:06 crc kubenswrapper[5108]: I0104 00:18:06.719000 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458098-5vhdx"
Jan 04 00:18:06 crc kubenswrapper[5108]: I0104 00:18:06.797270 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-968gx\" (UniqueName: \"kubernetes.io/projected/059ddc1f-fb99-4798-a5cf-c91d217c2763-kube-api-access-968gx\") pod \"059ddc1f-fb99-4798-a5cf-c91d217c2763\" (UID: \"059ddc1f-fb99-4798-a5cf-c91d217c2763\") "
Jan 04 00:18:06 crc kubenswrapper[5108]: I0104 00:18:06.805906 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/059ddc1f-fb99-4798-a5cf-c91d217c2763-kube-api-access-968gx" (OuterVolumeSpecName: "kube-api-access-968gx") pod "059ddc1f-fb99-4798-a5cf-c91d217c2763" (UID: "059ddc1f-fb99-4798-a5cf-c91d217c2763"). InnerVolumeSpecName "kube-api-access-968gx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:18:06 crc kubenswrapper[5108]: I0104 00:18:06.899142 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-968gx\" (UniqueName: \"kubernetes.io/projected/059ddc1f-fb99-4798-a5cf-c91d217c2763-kube-api-access-968gx\") on node \"crc\" DevicePath \"\""
Jan 04 00:18:07 crc kubenswrapper[5108]: I0104 00:18:07.210350 5108 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-03 00:13:05 +0000 UTC" deadline="2026-01-29 01:44:37.323940296 +0000 UTC"
Jan 04 00:18:07 crc kubenswrapper[5108]: I0104 00:18:07.210388 5108 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="601h26m30.113553724s"
Jan 04 00:18:07 crc kubenswrapper[5108]: I0104 00:18:07.414618 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458098-5vhdx"
Jan 04 00:18:07 crc kubenswrapper[5108]: I0104 00:18:07.414726 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458098-5vhdx" event={"ID":"059ddc1f-fb99-4798-a5cf-c91d217c2763","Type":"ContainerDied","Data":"67964c4b3fc5b68bfa88426ebc77b98d53951b9583bb9a05d67abab828c58833"}
Jan 04 00:18:07 crc kubenswrapper[5108]: I0104 00:18:07.415236 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67964c4b3fc5b68bfa88426ebc77b98d53951b9583bb9a05d67abab828c58833"
Jan 04 00:18:24 crc kubenswrapper[5108]: I0104 00:18:24.916859 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 04 00:18:24 crc kubenswrapper[5108]: I0104 00:18:24.917713 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 04 00:18:54 crc kubenswrapper[5108]: I0104 00:18:54.917568 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 04 00:18:54 crc kubenswrapper[5108]: I0104 00:18:54.918639 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 04 00:18:54 crc kubenswrapper[5108]: I0104 00:18:54.918732 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-njl5v"
Jan 04 00:18:54 crc kubenswrapper[5108]: I0104 00:18:54.920589 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"98c0ce6db2062cf99e2e7a19595c98fef731421d446df51d11c001f56a4c3cd2"} pod="openshift-machine-config-operator/machine-config-daemon-njl5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 04 00:18:54 crc kubenswrapper[5108]: I0104 00:18:54.921070 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" containerID="cri-o://98c0ce6db2062cf99e2e7a19595c98fef731421d446df51d11c001f56a4c3cd2" gracePeriod=600
Jan 04 00:18:55 crc kubenswrapper[5108]: I0104 00:18:55.785450 5108 generic.go:358] "Generic (PLEG): container finished" podID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerID="98c0ce6db2062cf99e2e7a19595c98fef731421d446df51d11c001f56a4c3cd2" exitCode=0
Jan 04 00:18:55 crc kubenswrapper[5108]: I0104 00:18:55.785568 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerDied","Data":"98c0ce6db2062cf99e2e7a19595c98fef731421d446df51d11c001f56a4c3cd2"}
Jan 04 00:18:55 crc kubenswrapper[5108]: I0104 00:18:55.786580 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerStarted","Data":"335e8dafd09ef6d4b5814847b54a00f48c49785e811fdaed2b4bdcd55dc20429"}
Jan 04 00:18:55 crc kubenswrapper[5108]: I0104 00:18:55.786626 5108 scope.go:117] "RemoveContainer" containerID="94f4e2cbc916293b4e6676fb0b3fe4568b76f062b4ce243281ad611c1958954a"
Jan 04 00:19:26 crc kubenswrapper[5108]: I0104 00:19:26.818223 5108 scope.go:117] "RemoveContainer" containerID="af0652253cfcf907c4112a70d8311252aebd9976a1eab822bf19292256c3765d"
Jan 04 00:19:26 crc kubenswrapper[5108]: I0104 00:19:26.856130 5108 scope.go:117] "RemoveContainer" containerID="6718602829d7187e179e3d9a5a97cb615d69b68332d5b22facd1a7ce05049c18"
Jan 04 00:19:26 crc kubenswrapper[5108]: I0104 00:19:26.890584 5108 scope.go:117] "RemoveContainer" containerID="d09590af24f0083c4ffcfd8bbf55561836823b57f2d135fe7982b8a97fab80bf"
Jan 04 00:20:00 crc kubenswrapper[5108]: I0104 00:20:00.137666 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29458100-t857c"]
Jan 04 00:20:00 crc kubenswrapper[5108]: I0104 00:20:00.139266 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="059ddc1f-fb99-4798-a5cf-c91d217c2763" containerName="oc"
Jan 04 00:20:00 crc kubenswrapper[5108]: I0104 00:20:00.139284 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="059ddc1f-fb99-4798-a5cf-c91d217c2763" containerName="oc"
Jan 04 00:20:00 crc kubenswrapper[5108]: I0104 00:20:00.139402 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="059ddc1f-fb99-4798-a5cf-c91d217c2763" containerName="oc"
Jan 04 00:20:00 crc kubenswrapper[5108]: I0104 00:20:00.143154 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458100-t857c"
Jan 04 00:20:00 crc kubenswrapper[5108]: I0104 00:20:00.146349 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 04 00:20:00 crc kubenswrapper[5108]: I0104 00:20:00.146990 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 04 00:20:00 crc kubenswrapper[5108]: I0104 00:20:00.149114 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458100-t857c"]
Jan 04 00:20:00 crc kubenswrapper[5108]: I0104 00:20:00.149667 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-s7k94\""
Jan 04 00:20:00 crc kubenswrapper[5108]: I0104 00:20:00.191340 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdsq4\" (UniqueName: \"kubernetes.io/projected/53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a-kube-api-access-fdsq4\") pod \"auto-csr-approver-29458100-t857c\" (UID: \"53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a\") " pod="openshift-infra/auto-csr-approver-29458100-t857c"
Jan 04 00:20:00 crc kubenswrapper[5108]: I0104 00:20:00.292777 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fdsq4\" (UniqueName: \"kubernetes.io/projected/53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a-kube-api-access-fdsq4\") pod \"auto-csr-approver-29458100-t857c\" (UID: \"53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a\") " pod="openshift-infra/auto-csr-approver-29458100-t857c"
Jan 04 00:20:00 crc kubenswrapper[5108]: I0104 00:20:00.316900 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdsq4\" (UniqueName: \"kubernetes.io/projected/53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a-kube-api-access-fdsq4\") pod \"auto-csr-approver-29458100-t857c\" (UID: \"53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a\") " pod="openshift-infra/auto-csr-approver-29458100-t857c"
Jan 04 00:20:00 crc kubenswrapper[5108]: I0104 00:20:00.507445 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458100-t857c"
Jan 04 00:20:00 crc kubenswrapper[5108]: I0104 00:20:00.769956 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458100-t857c"]
Jan 04 00:20:01 crc kubenswrapper[5108]: I0104 00:20:01.289026 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458100-t857c" event={"ID":"53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a","Type":"ContainerStarted","Data":"1517359590e3408f69bbb368af1dcb09f104eda4c1f189858b5d974435b8c21d"}
Jan 04 00:20:03 crc kubenswrapper[5108]: I0104 00:20:03.307147 5108 generic.go:358] "Generic (PLEG): container finished" podID="53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a" containerID="2ff78851b11fc0a028c6db8544eab5c51ff187424527b341b724a10a42d50636" exitCode=0
Jan 04 00:20:03 crc kubenswrapper[5108]: I0104 00:20:03.307457 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458100-t857c" event={"ID":"53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a","Type":"ContainerDied","Data":"2ff78851b11fc0a028c6db8544eab5c51ff187424527b341b724a10a42d50636"}
Jan 04 00:20:04 crc kubenswrapper[5108]: I0104 00:20:04.572136 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458100-t857c"
Jan 04 00:20:04 crc kubenswrapper[5108]: I0104 00:20:04.579062 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdsq4\" (UniqueName: \"kubernetes.io/projected/53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a-kube-api-access-fdsq4\") pod \"53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a\" (UID: \"53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a\") "
Jan 04 00:20:04 crc kubenswrapper[5108]: I0104 00:20:04.589442 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a-kube-api-access-fdsq4" (OuterVolumeSpecName: "kube-api-access-fdsq4") pod "53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a" (UID: "53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a"). InnerVolumeSpecName "kube-api-access-fdsq4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:20:04 crc kubenswrapper[5108]: I0104 00:20:04.680735 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fdsq4\" (UniqueName: \"kubernetes.io/projected/53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a-kube-api-access-fdsq4\") on node \"crc\" DevicePath \"\""
Jan 04 00:20:05 crc kubenswrapper[5108]: I0104 00:20:05.327435 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458100-t857c" event={"ID":"53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a","Type":"ContainerDied","Data":"1517359590e3408f69bbb368af1dcb09f104eda4c1f189858b5d974435b8c21d"}
Jan 04 00:20:05 crc kubenswrapper[5108]: I0104 00:20:05.328028 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1517359590e3408f69bbb368af1dcb09f104eda4c1f189858b5d974435b8c21d"
Jan 04 00:20:05 crc kubenswrapper[5108]: I0104 00:20:05.327530 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458100-t857c"
Jan 04 00:20:26 crc kubenswrapper[5108]: I0104 00:20:26.715396 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 04 00:20:26 crc kubenswrapper[5108]: I0104 00:20:26.716845 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 04 00:21:24 crc kubenswrapper[5108]: I0104 00:21:24.917838 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 04 00:21:24 crc kubenswrapper[5108]: I0104 00:21:24.919358 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 04 00:21:54 crc kubenswrapper[5108]: I0104 00:21:54.917283 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 04 00:21:54 crc kubenswrapper[5108]: I0104 00:21:54.918369 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 04 00:22:00 crc kubenswrapper[5108]: I0104 00:22:00.138288 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29458102-msftx"]
Jan 04 00:22:00 crc kubenswrapper[5108]: I0104 00:22:00.139612 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a" containerName="oc"
Jan 04 00:22:00 crc kubenswrapper[5108]: I0104 00:22:00.139629 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a" containerName="oc"
Jan 04 00:22:00 crc kubenswrapper[5108]: I0104 00:22:00.139739 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a" containerName="oc"
Jan 04 00:22:00 crc kubenswrapper[5108]: I0104 00:22:00.280883 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458102-msftx"]
Jan 04 00:22:00 crc kubenswrapper[5108]: I0104 00:22:00.281143 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458102-msftx"
Jan 04 00:22:00 crc kubenswrapper[5108]: I0104 00:22:00.283642 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 04 00:22:00 crc kubenswrapper[5108]: I0104 00:22:00.284907 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 04 00:22:00 crc kubenswrapper[5108]: I0104 00:22:00.285074 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-s7k94\""
Jan 04 00:22:00 crc kubenswrapper[5108]: I0104 00:22:00.311902 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gj6q\" (UniqueName: \"kubernetes.io/projected/d23c37b5-6c23-48f9-960a-a9c174d8430c-kube-api-access-9gj6q\") pod \"auto-csr-approver-29458102-msftx\" (UID: \"d23c37b5-6c23-48f9-960a-a9c174d8430c\") " pod="openshift-infra/auto-csr-approver-29458102-msftx"
Jan 04 00:22:00 crc kubenswrapper[5108]: I0104 00:22:00.413342 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9gj6q\" (UniqueName: \"kubernetes.io/projected/d23c37b5-6c23-48f9-960a-a9c174d8430c-kube-api-access-9gj6q\") pod \"auto-csr-approver-29458102-msftx\" (UID: \"d23c37b5-6c23-48f9-960a-a9c174d8430c\") " pod="openshift-infra/auto-csr-approver-29458102-msftx"
Jan 04 00:22:00 crc kubenswrapper[5108]: I0104 00:22:00.436373 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gj6q\" (UniqueName: \"kubernetes.io/projected/d23c37b5-6c23-48f9-960a-a9c174d8430c-kube-api-access-9gj6q\") pod \"auto-csr-approver-29458102-msftx\" (UID: \"d23c37b5-6c23-48f9-960a-a9c174d8430c\") " pod="openshift-infra/auto-csr-approver-29458102-msftx"
Jan 04 00:22:00 crc kubenswrapper[5108]: I0104 00:22:00.601987 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458102-msftx"
Jan 04 00:22:01 crc kubenswrapper[5108]: I0104 00:22:01.033552 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458102-msftx"]
Jan 04 00:22:01 crc kubenswrapper[5108]: I0104 00:22:01.044151 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 04 00:22:01 crc kubenswrapper[5108]: I0104 00:22:01.171869 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458102-msftx" event={"ID":"d23c37b5-6c23-48f9-960a-a9c174d8430c","Type":"ContainerStarted","Data":"e1d90542555259ab029674f0eb22f8738bdc2874f4c0ddcaaad6e761e5752363"}
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.528033 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz"]
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.528891 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" podUID="2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6" containerName="kube-rbac-proxy" containerID="cri-o://afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794" gracePeriod=30
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.528996 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" podUID="2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6" containerName="ovnkube-cluster-manager" containerID="cri-o://f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17" gracePeriod=30
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.736137 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nhl4w"]
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.736818 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="ovn-controller" containerID="cri-o://07a9dbf38baca9c5b3fbe3dde40d4a145aa21599df789024c5de598cc56ae61d" gracePeriod=30
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.736879 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="northd" containerID="cri-o://ceb6c895f063e34fecf7a80e91ded0aba6095b63274ff38158805a59e6edfdcf" gracePeriod=30
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.736961 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="sbdb" containerID="cri-o://b63ae9033e496d2a17ee91a45e474e8e1a42c4d995a69d760d7187a8cf59aa2d" gracePeriod=30
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.737000 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="ovn-acl-logging" containerID="cri-o://44faeb57fa086d65419836ca35d54febc9a5fbdca1cd7c4f65aceecd1577f867" gracePeriod=30
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.737004 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="kube-rbac-proxy-node" containerID="cri-o://71bb346536e06ba0117423a8b6637180393256b79d6aeb8295eb95f0866da85b" gracePeriod=30
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.737061 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://9ca02fd651dcad92f4572ec7f186527a8984074514c99f8dc8723a14f0bb5428" gracePeriod=30
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.737285 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="nbdb" containerID="cri-o://2c106b68a27f251b0aa323e664dbc162c47f77a095870675163fc1c7f76ab87a" gracePeriod=30
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.765626 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="ovnkube-controller" containerID="cri-o://4debfb6392bc1bc3f892ac0820a0cac382eee9fe3e7c3376c06c41d8b5f0c981" gracePeriod=30
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.799510 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz"
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.838916 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj"]
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.839800 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6" containerName="kube-rbac-proxy"
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.839829 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6" containerName="kube-rbac-proxy"
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.839847 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6" containerName="ovnkube-cluster-manager"
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.839854 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6" containerName="ovnkube-cluster-manager"
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.839989 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6" containerName="ovnkube-cluster-manager"
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.840005 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6" containerName="kube-rbac-proxy"
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.847079 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj"
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.853916 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-env-overrides\") pod \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\" (UID: \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\") "
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.853977 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-ovnkube-config\") pod \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\" (UID: \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\") "
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.854153 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-ovn-control-plane-metrics-cert\") pod \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\" (UID: \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\") "
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.854355 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2v7q\" (UniqueName: \"kubernetes.io/projected/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-kube-api-access-z2v7q\") pod \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\" (UID: \"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6\") "
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.855215 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6" (UID: "2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.856500 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6" (UID: "2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.873440 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-kube-api-access-z2v7q" (OuterVolumeSpecName: "kube-api-access-z2v7q") pod "2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6" (UID: "2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6"). InnerVolumeSpecName "kube-api-access-z2v7q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.874853 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6" (UID: "2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.955642 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f087406-1da8-4fcf-8808-a54498e8d36c-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-pz6jj\" (UID: \"9f087406-1da8-4fcf-8808-a54498e8d36c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj"
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.955735 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f087406-1da8-4fcf-8808-a54498e8d36c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-pz6jj\" (UID: \"9f087406-1da8-4fcf-8808-a54498e8d36c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj"
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.955774 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f087406-1da8-4fcf-8808-a54498e8d36c-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-pz6jj\" (UID: \"9f087406-1da8-4fcf-8808-a54498e8d36c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj"
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.955806 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5knh\" (UniqueName: \"kubernetes.io/projected/9f087406-1da8-4fcf-8808-a54498e8d36c-kube-api-access-q5knh\") pod \"ovnkube-control-plane-97c9b6c48-pz6jj\" (UID: \"9f087406-1da8-4fcf-8808-a54498e8d36c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj"
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.955846 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z2v7q\" (UniqueName: \"kubernetes.io/projected/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-kube-api-access-z2v7q\") on node \"crc\" DevicePath \"\""
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.955861 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.955870 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:22:02 crc kubenswrapper[5108]: I0104 00:22:02.955879 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.057869 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f087406-1da8-4fcf-8808-a54498e8d36c-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-pz6jj\" (UID: \"9f087406-1da8-4fcf-8808-a54498e8d36c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj"
Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.057949 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q5knh\" (UniqueName: \"kubernetes.io/projected/9f087406-1da8-4fcf-8808-a54498e8d36c-kube-api-access-q5knh\") pod \"ovnkube-control-plane-97c9b6c48-pz6jj\" (UID: \"9f087406-1da8-4fcf-8808-a54498e8d36c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj"
Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.057996 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f087406-1da8-4fcf-8808-a54498e8d36c-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-pz6jj\" (UID: \"9f087406-1da8-4fcf-8808-a54498e8d36c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.058067 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f087406-1da8-4fcf-8808-a54498e8d36c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-pz6jj\" (UID: \"9f087406-1da8-4fcf-8808-a54498e8d36c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.059498 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f087406-1da8-4fcf-8808-a54498e8d36c-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-pz6jj\" (UID: \"9f087406-1da8-4fcf-8808-a54498e8d36c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.059566 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f087406-1da8-4fcf-8808-a54498e8d36c-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-pz6jj\" (UID: \"9f087406-1da8-4fcf-8808-a54498e8d36c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.066077 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f087406-1da8-4fcf-8808-a54498e8d36c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-pz6jj\" (UID: \"9f087406-1da8-4fcf-8808-a54498e8d36c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj" Jan 
04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.080317 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5knh\" (UniqueName: \"kubernetes.io/projected/9f087406-1da8-4fcf-8808-a54498e8d36c-kube-api-access-q5knh\") pod \"ovnkube-control-plane-97c9b6c48-pz6jj\" (UID: \"9f087406-1da8-4fcf-8808-a54498e8d36c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.200376 5108 generic.go:358] "Generic (PLEG): container finished" podID="2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6" containerID="f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17" exitCode=0 Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.200420 5108 generic.go:358] "Generic (PLEG): container finished" podID="2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6" containerID="afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794" exitCode=0 Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.200444 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" event={"ID":"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6","Type":"ContainerDied","Data":"f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17"} Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.200510 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" event={"ID":"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6","Type":"ContainerDied","Data":"afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794"} Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.200526 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" event={"ID":"2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6","Type":"ContainerDied","Data":"80a86e31c3e4fac2b225a746ba153cced16ff4d887b302f70c8da3431dee0c21"} Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 
00:22:03.200541 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.200592 5108 scope.go:117] "RemoveContainer" containerID="f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.207221 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzs5n_8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23/kube-multus/0.log" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.207270 5108 generic.go:358] "Generic (PLEG): container finished" podID="8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23" containerID="7992dd8c360b5fa59546180b8358b2f8950b3b1a60bddb47c4b085abd26fee5f" exitCode=2 Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.207381 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rzs5n" event={"ID":"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23","Type":"ContainerDied","Data":"7992dd8c360b5fa59546180b8358b2f8950b3b1a60bddb47c4b085abd26fee5f"} Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.208260 5108 scope.go:117] "RemoveContainer" containerID="7992dd8c360b5fa59546180b8358b2f8950b3b1a60bddb47c4b085abd26fee5f" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.209965 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.211420 5108 generic.go:358] "Generic (PLEG): container finished" podID="d23c37b5-6c23-48f9-960a-a9c174d8430c" containerID="18abcf584a10658b74f08503746f145aa65528f4db2db21b58910df46c712b62" exitCode=0 Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.211541 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458102-msftx" event={"ID":"d23c37b5-6c23-48f9-960a-a9c174d8430c","Type":"ContainerDied","Data":"18abcf584a10658b74f08503746f145aa65528f4db2db21b58910df46c712b62"} Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.219108 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nhl4w_20d6d69a-45c2-4c35-8a5d-22d3815de8e5/ovn-acl-logging/0.log" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.220252 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nhl4w_20d6d69a-45c2-4c35-8a5d-22d3815de8e5/ovn-controller/0.log" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.221655 5108 generic.go:358] "Generic (PLEG): container finished" podID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerID="4debfb6392bc1bc3f892ac0820a0cac382eee9fe3e7c3376c06c41d8b5f0c981" exitCode=0 Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.221701 5108 generic.go:358] "Generic (PLEG): container finished" podID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerID="b63ae9033e496d2a17ee91a45e474e8e1a42c4d995a69d760d7187a8cf59aa2d" exitCode=0 Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.221714 5108 generic.go:358] "Generic (PLEG): container finished" podID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerID="2c106b68a27f251b0aa323e664dbc162c47f77a095870675163fc1c7f76ab87a" exitCode=0 Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.221725 5108 generic.go:358] "Generic 
(PLEG): container finished" podID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerID="9ca02fd651dcad92f4572ec7f186527a8984074514c99f8dc8723a14f0bb5428" exitCode=0 Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.221735 5108 generic.go:358] "Generic (PLEG): container finished" podID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerID="71bb346536e06ba0117423a8b6637180393256b79d6aeb8295eb95f0866da85b" exitCode=0 Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.221748 5108 generic.go:358] "Generic (PLEG): container finished" podID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerID="44faeb57fa086d65419836ca35d54febc9a5fbdca1cd7c4f65aceecd1577f867" exitCode=143 Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.221758 5108 generic.go:358] "Generic (PLEG): container finished" podID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerID="07a9dbf38baca9c5b3fbe3dde40d4a145aa21599df789024c5de598cc56ae61d" exitCode=143 Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.221766 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerDied","Data":"4debfb6392bc1bc3f892ac0820a0cac382eee9fe3e7c3376c06c41d8b5f0c981"} Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.221820 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerDied","Data":"b63ae9033e496d2a17ee91a45e474e8e1a42c4d995a69d760d7187a8cf59aa2d"} Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.221836 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerDied","Data":"2c106b68a27f251b0aa323e664dbc162c47f77a095870675163fc1c7f76ab87a"} Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.221848 5108 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerDied","Data":"9ca02fd651dcad92f4572ec7f186527a8984074514c99f8dc8723a14f0bb5428"} Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.221857 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerDied","Data":"71bb346536e06ba0117423a8b6637180393256b79d6aeb8295eb95f0866da85b"} Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.221869 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerDied","Data":"44faeb57fa086d65419836ca35d54febc9a5fbdca1cd7c4f65aceecd1577f867"} Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.221881 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerDied","Data":"07a9dbf38baca9c5b3fbe3dde40d4a145aa21599df789024c5de598cc56ae61d"} Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.250457 5108 scope.go:117] "RemoveContainer" containerID="afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794" Jan 04 00:22:03 crc kubenswrapper[5108]: W0104 00:22:03.254608 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f087406_1da8_4fcf_8808_a54498e8d36c.slice/crio-ef55370e9e0696d3562f95b0ece6c6a520e18209f85b795f5b275b62aa82d519 WatchSource:0}: Error finding container ef55370e9e0696d3562f95b0ece6c6a520e18209f85b795f5b275b62aa82d519: Status 404 returned error can't find the container with id ef55370e9e0696d3562f95b0ece6c6a520e18209f85b795f5b275b62aa82d519 Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.296045 5108 scope.go:117] "RemoveContainer" 
containerID="f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17" Jan 04 00:22:03 crc kubenswrapper[5108]: E0104 00:22:03.296555 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17\": container with ID starting with f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17 not found: ID does not exist" containerID="f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.296594 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17"} err="failed to get container status \"f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17\": rpc error: code = NotFound desc = could not find container \"f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17\": container with ID starting with f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17 not found: ID does not exist" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.296616 5108 scope.go:117] "RemoveContainer" containerID="afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794" Jan 04 00:22:03 crc kubenswrapper[5108]: E0104 00:22:03.297297 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794\": container with ID starting with afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794 not found: ID does not exist" containerID="afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.297386 5108 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794"} err="failed to get container status \"afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794\": rpc error: code = NotFound desc = could not find container \"afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794\": container with ID starting with afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794 not found: ID does not exist" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.297437 5108 scope.go:117] "RemoveContainer" containerID="f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.297753 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz"] Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.297925 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17"} err="failed to get container status \"f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17\": rpc error: code = NotFound desc = could not find container \"f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17\": container with ID starting with f059fad77f4bb45289af0a83924f281206b30b9544535ab7c9f8809e9311fc17 not found: ID does not exist" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.297994 5108 scope.go:117] "RemoveContainer" containerID="afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.298383 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794"} err="failed to get container status \"afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794\": rpc error: code = NotFound desc = could not find container 
\"afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794\": container with ID starting with afba35e34ce5c46b9f44a8526e363fb7bc8a18271e12c3472d80cc34143fa794 not found: ID does not exist" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.300872 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d8pjz"] Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.586357 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nhl4w_20d6d69a-45c2-4c35-8a5d-22d3815de8e5/ovn-acl-logging/0.log" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.586999 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nhl4w_20d6d69a-45c2-4c35-8a5d-22d3815de8e5/ovn-controller/0.log" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.587534 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.653165 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2n54k"] Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654099 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="sbdb" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654129 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="sbdb" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654142 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="ovn-acl-logging" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654149 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" 
containerName="ovn-acl-logging" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654160 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="kube-rbac-proxy-ovn-metrics" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654171 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="kube-rbac-proxy-ovn-metrics" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654190 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="kube-rbac-proxy-node" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654216 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="kube-rbac-proxy-node" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654227 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="nbdb" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654232 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="nbdb" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654242 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="ovn-controller" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654247 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="ovn-controller" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654258 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="northd" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654263 5108 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="northd" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654270 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="kubecfg-setup" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654275 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="kubecfg-setup" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654281 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="ovnkube-controller" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654286 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="ovnkube-controller" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654383 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="ovn-controller" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654394 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="ovn-acl-logging" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654402 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="nbdb" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654414 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="kube-rbac-proxy-ovn-metrics" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654421 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="kube-rbac-proxy-node" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654434 5108 
memory_manager.go:356] "RemoveStaleState removing state" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="sbdb" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654443 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="northd" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.654452 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerName="ovnkube-controller" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.659459 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666103 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-etc-openvswitch\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666181 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-slash\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666219 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-run-netns\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666233 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-etc-openvswitch" 
(OuterVolumeSpecName: "etc-openvswitch") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666238 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-run-ovn-kubernetes\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666293 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666327 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-slash" (OuterVolumeSpecName: "host-slash") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666349 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666360 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph7rp\" (UniqueName: \"kubernetes.io/projected/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-kube-api-access-ph7rp\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666399 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-ovn\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666439 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-openvswitch\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666471 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666505 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-var-lib-openvswitch\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666528 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-cni-netd\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666578 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-log-socket\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666708 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-cni-bin\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666747 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-systemd\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666769 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-systemd-units\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666820 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-node-log\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 
04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666868 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovn-node-metrics-cert\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666904 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-env-overrides\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666937 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovnkube-script-lib\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.666993 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-kubelet\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.667131 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovnkube-config\") pod \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\" (UID: \"20d6d69a-45c2-4c35-8a5d-22d3815de8e5\") " Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.667605 5108 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.667620 5108 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-slash\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.667630 5108 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.667641 5108 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.667687 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.667728 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.667756 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.667790 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.667855 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.667885 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.667917 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-log-socket" (OuterVolumeSpecName: "log-socket") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.668520 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.668567 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.668944 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.669001 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.669007 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.669051 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-node-log" (OuterVolumeSpecName: "node-log") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.674430 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.675392 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-kube-api-access-ph7rp" (OuterVolumeSpecName: "kube-api-access-ph7rp") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "kube-api-access-ph7rp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.692008 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "20d6d69a-45c2-4c35-8a5d-22d3815de8e5" (UID: "20d6d69a-45c2-4c35-8a5d-22d3815de8e5"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.768873 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-slash\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.768994 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e5983de4-b9c0-4e89-8ee9-159125653050-ovnkube-config\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.769025 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-etc-openvswitch\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.769057 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-run-systemd\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.769077 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-run-openvswitch\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.769235 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-var-lib-openvswitch\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.769312 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-run-ovn-kubernetes\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.769339 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e5983de4-b9c0-4e89-8ee9-159125653050-env-overrides\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.769403 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl6pr\" (UniqueName: \"kubernetes.io/projected/e5983de4-b9c0-4e89-8ee9-159125653050-kube-api-access-fl6pr\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.769529 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-log-socket\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.769620 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e5983de4-b9c0-4e89-8ee9-159125653050-ovnkube-script-lib\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.769763 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e5983de4-b9c0-4e89-8ee9-159125653050-ovn-node-metrics-cert\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.769803 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770155 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-cni-netd\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770235 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-node-log\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770267 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-cni-bin\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770329 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-systemd-units\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 
00:22:03.770401 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-kubelet\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770462 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-run-ovn\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770525 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-run-netns\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770664 5108 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770682 5108 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770697 5108 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 04 
00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770714 5108 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770729 5108 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770741 5108 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-log-socket\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770754 5108 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770766 5108 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770777 5108 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770787 5108 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-node-log\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770799 5108 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770810 5108 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770821 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770833 5108 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770844 5108 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.770858 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ph7rp\" (UniqueName: \"kubernetes.io/projected/20d6d69a-45c2-4c35-8a5d-22d3815de8e5-kube-api-access-ph7rp\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.872912 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-cni-bin\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873488 5108 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-systemd-units\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873522 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-kubelet\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873543 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-run-ovn\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873570 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-run-netns\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873599 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-slash\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873624 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/e5983de4-b9c0-4e89-8ee9-159125653050-ovnkube-config\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873646 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-etc-openvswitch\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873671 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-run-systemd\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873689 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-run-openvswitch\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873709 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-var-lib-openvswitch\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873730 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-run-ovn-kubernetes\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873753 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e5983de4-b9c0-4e89-8ee9-159125653050-env-overrides\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873772 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fl6pr\" (UniqueName: \"kubernetes.io/projected/e5983de4-b9c0-4e89-8ee9-159125653050-kube-api-access-fl6pr\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873792 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-log-socket\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873826 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e5983de4-b9c0-4e89-8ee9-159125653050-ovnkube-script-lib\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873868 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/e5983de4-b9c0-4e89-8ee9-159125653050-ovn-node-metrics-cert\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873889 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873924 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-cni-netd\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873944 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-node-log\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.874030 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-node-log\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.873086 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-cni-bin\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.874093 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-systemd-units\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.874120 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-kubelet\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.874143 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-run-ovn\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.874166 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-run-netns\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.874191 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-slash\") pod \"ovnkube-node-2n54k\" (UID: 
\"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.875012 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e5983de4-b9c0-4e89-8ee9-159125653050-env-overrides\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.875061 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e5983de4-b9c0-4e89-8ee9-159125653050-ovnkube-config\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.875088 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-etc-openvswitch\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.875124 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-run-systemd\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.875168 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-log-socket\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc 
kubenswrapper[5108]: I0104 00:22:03.875235 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-run-openvswitch\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.875269 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-var-lib-openvswitch\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.875294 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-run-ovn-kubernetes\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.875327 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.875358 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5983de4-b9c0-4e89-8ee9-159125653050-host-cni-netd\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 
00:22:03.875776 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e5983de4-b9c0-4e89-8ee9-159125653050-ovnkube-script-lib\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.890256 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e5983de4-b9c0-4e89-8ee9-159125653050-ovn-node-metrics-cert\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.898456 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl6pr\" (UniqueName: \"kubernetes.io/projected/e5983de4-b9c0-4e89-8ee9-159125653050-kube-api-access-fl6pr\") pod \"ovnkube-node-2n54k\" (UID: \"e5983de4-b9c0-4e89-8ee9-159125653050\") " pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:03 crc kubenswrapper[5108]: I0104 00:22:03.981828 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:04 crc kubenswrapper[5108]: W0104 00:22:04.002030 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5983de4_b9c0_4e89_8ee9_159125653050.slice/crio-7ce7aa3dce09ab159ef5cd2afa99273e639ef52243b50cd0a8e3842e5928f73a WatchSource:0}: Error finding container 7ce7aa3dce09ab159ef5cd2afa99273e639ef52243b50cd0a8e3842e5928f73a: Status 404 returned error can't find the container with id 7ce7aa3dce09ab159ef5cd2afa99273e639ef52243b50cd0a8e3842e5928f73a Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.231609 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzs5n_8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23/kube-multus/0.log" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.231821 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rzs5n" event={"ID":"8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23","Type":"ContainerStarted","Data":"ffcffb2b94591931f0eeae4876c99d393c867c3c8455ad4c4f70720479e688e5"} Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.236392 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nhl4w_20d6d69a-45c2-4c35-8a5d-22d3815de8e5/ovn-acl-logging/0.log" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.236837 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nhl4w_20d6d69a-45c2-4c35-8a5d-22d3815de8e5/ovn-controller/0.log" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.237157 5108 generic.go:358] "Generic (PLEG): container finished" podID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" containerID="ceb6c895f063e34fecf7a80e91ded0aba6095b63274ff38158805a59e6edfdcf" exitCode=0 Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.237298 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerDied","Data":"ceb6c895f063e34fecf7a80e91ded0aba6095b63274ff38158805a59e6edfdcf"} Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.237334 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" event={"ID":"20d6d69a-45c2-4c35-8a5d-22d3815de8e5","Type":"ContainerDied","Data":"0d9c5a8de15df6caaa824945872985cdd809b7a69873fa20ec6d08fedd59af7e"} Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.237357 5108 scope.go:117] "RemoveContainer" containerID="4debfb6392bc1bc3f892ac0820a0cac382eee9fe3e7c3376c06c41d8b5f0c981" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.237454 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nhl4w" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.240273 5108 generic.go:358] "Generic (PLEG): container finished" podID="e5983de4-b9c0-4e89-8ee9-159125653050" containerID="b1053daa277b411d39bddc8f097c655813f20e0be7f8dc0710cf7fb5e5960089" exitCode=0 Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.240385 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" event={"ID":"e5983de4-b9c0-4e89-8ee9-159125653050","Type":"ContainerDied","Data":"b1053daa277b411d39bddc8f097c655813f20e0be7f8dc0710cf7fb5e5960089"} Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.240410 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" event={"ID":"e5983de4-b9c0-4e89-8ee9-159125653050","Type":"ContainerStarted","Data":"7ce7aa3dce09ab159ef5cd2afa99273e639ef52243b50cd0a8e3842e5928f73a"} Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.243517 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj" 
event={"ID":"9f087406-1da8-4fcf-8808-a54498e8d36c","Type":"ContainerStarted","Data":"7a78ef3fb826292192f417d6cef60b7b3bbe5a75d393a0f29eeaebdf9b1da9b2"} Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.243566 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj" event={"ID":"9f087406-1da8-4fcf-8808-a54498e8d36c","Type":"ContainerStarted","Data":"74b3eba509ec3a8f522e56abf02584adbe8a357917685c492b83894919419c0e"} Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.243581 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj" event={"ID":"9f087406-1da8-4fcf-8808-a54498e8d36c","Type":"ContainerStarted","Data":"ef55370e9e0696d3562f95b0ece6c6a520e18209f85b795f5b275b62aa82d519"} Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.269098 5108 scope.go:117] "RemoveContainer" containerID="b63ae9033e496d2a17ee91a45e474e8e1a42c4d995a69d760d7187a8cf59aa2d" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.282755 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nhl4w"] Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.288230 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nhl4w"] Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.302436 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pz6jj" podStartSLOduration=2.302414356 podStartE2EDuration="2.302414356s" podCreationTimestamp="2026-01-04 00:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:22:04.299964977 +0000 UTC m=+698.288530083" watchObservedRunningTime="2026-01-04 00:22:04.302414356 +0000 UTC m=+698.290979442" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 
00:22:04.345762 5108 scope.go:117] "RemoveContainer" containerID="2c106b68a27f251b0aa323e664dbc162c47f77a095870675163fc1c7f76ab87a" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.374755 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458102-msftx" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.384854 5108 scope.go:117] "RemoveContainer" containerID="ceb6c895f063e34fecf7a80e91ded0aba6095b63274ff38158805a59e6edfdcf" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.405218 5108 scope.go:117] "RemoveContainer" containerID="9ca02fd651dcad92f4572ec7f186527a8984074514c99f8dc8723a14f0bb5428" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.423388 5108 scope.go:117] "RemoveContainer" containerID="71bb346536e06ba0117423a8b6637180393256b79d6aeb8295eb95f0866da85b" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.439801 5108 scope.go:117] "RemoveContainer" containerID="44faeb57fa086d65419836ca35d54febc9a5fbdca1cd7c4f65aceecd1577f867" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.459145 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d6d69a-45c2-4c35-8a5d-22d3815de8e5" path="/var/lib/kubelet/pods/20d6d69a-45c2-4c35-8a5d-22d3815de8e5/volumes" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.459942 5108 scope.go:117] "RemoveContainer" containerID="07a9dbf38baca9c5b3fbe3dde40d4a145aa21599df789024c5de598cc56ae61d" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.460640 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6" path="/var/lib/kubelet/pods/2c95d1a3-7d43-48b4-afe6-dd3bf3b87dc6/volumes" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.479509 5108 scope.go:117] "RemoveContainer" containerID="f22b382b86afe25cecc9a71e51f0b968f7a21c0676745ebc7343989517586318" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.487906 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-9gj6q\" (UniqueName: \"kubernetes.io/projected/d23c37b5-6c23-48f9-960a-a9c174d8430c-kube-api-access-9gj6q\") pod \"d23c37b5-6c23-48f9-960a-a9c174d8430c\" (UID: \"d23c37b5-6c23-48f9-960a-a9c174d8430c\") " Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.492709 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d23c37b5-6c23-48f9-960a-a9c174d8430c-kube-api-access-9gj6q" (OuterVolumeSpecName: "kube-api-access-9gj6q") pod "d23c37b5-6c23-48f9-960a-a9c174d8430c" (UID: "d23c37b5-6c23-48f9-960a-a9c174d8430c"). InnerVolumeSpecName "kube-api-access-9gj6q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.499709 5108 scope.go:117] "RemoveContainer" containerID="4debfb6392bc1bc3f892ac0820a0cac382eee9fe3e7c3376c06c41d8b5f0c981" Jan 04 00:22:04 crc kubenswrapper[5108]: E0104 00:22:04.500337 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4debfb6392bc1bc3f892ac0820a0cac382eee9fe3e7c3376c06c41d8b5f0c981\": container with ID starting with 4debfb6392bc1bc3f892ac0820a0cac382eee9fe3e7c3376c06c41d8b5f0c981 not found: ID does not exist" containerID="4debfb6392bc1bc3f892ac0820a0cac382eee9fe3e7c3376c06c41d8b5f0c981" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.500396 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4debfb6392bc1bc3f892ac0820a0cac382eee9fe3e7c3376c06c41d8b5f0c981"} err="failed to get container status \"4debfb6392bc1bc3f892ac0820a0cac382eee9fe3e7c3376c06c41d8b5f0c981\": rpc error: code = NotFound desc = could not find container \"4debfb6392bc1bc3f892ac0820a0cac382eee9fe3e7c3376c06c41d8b5f0c981\": container with ID starting with 4debfb6392bc1bc3f892ac0820a0cac382eee9fe3e7c3376c06c41d8b5f0c981 not found: ID does not exist" Jan 04 00:22:04 crc 
kubenswrapper[5108]: I0104 00:22:04.500426 5108 scope.go:117] "RemoveContainer" containerID="b63ae9033e496d2a17ee91a45e474e8e1a42c4d995a69d760d7187a8cf59aa2d" Jan 04 00:22:04 crc kubenswrapper[5108]: E0104 00:22:04.500778 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b63ae9033e496d2a17ee91a45e474e8e1a42c4d995a69d760d7187a8cf59aa2d\": container with ID starting with b63ae9033e496d2a17ee91a45e474e8e1a42c4d995a69d760d7187a8cf59aa2d not found: ID does not exist" containerID="b63ae9033e496d2a17ee91a45e474e8e1a42c4d995a69d760d7187a8cf59aa2d" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.500802 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b63ae9033e496d2a17ee91a45e474e8e1a42c4d995a69d760d7187a8cf59aa2d"} err="failed to get container status \"b63ae9033e496d2a17ee91a45e474e8e1a42c4d995a69d760d7187a8cf59aa2d\": rpc error: code = NotFound desc = could not find container \"b63ae9033e496d2a17ee91a45e474e8e1a42c4d995a69d760d7187a8cf59aa2d\": container with ID starting with b63ae9033e496d2a17ee91a45e474e8e1a42c4d995a69d760d7187a8cf59aa2d not found: ID does not exist" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.500819 5108 scope.go:117] "RemoveContainer" containerID="2c106b68a27f251b0aa323e664dbc162c47f77a095870675163fc1c7f76ab87a" Jan 04 00:22:04 crc kubenswrapper[5108]: E0104 00:22:04.501121 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c106b68a27f251b0aa323e664dbc162c47f77a095870675163fc1c7f76ab87a\": container with ID starting with 2c106b68a27f251b0aa323e664dbc162c47f77a095870675163fc1c7f76ab87a not found: ID does not exist" containerID="2c106b68a27f251b0aa323e664dbc162c47f77a095870675163fc1c7f76ab87a" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.501180 5108 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2c106b68a27f251b0aa323e664dbc162c47f77a095870675163fc1c7f76ab87a"} err="failed to get container status \"2c106b68a27f251b0aa323e664dbc162c47f77a095870675163fc1c7f76ab87a\": rpc error: code = NotFound desc = could not find container \"2c106b68a27f251b0aa323e664dbc162c47f77a095870675163fc1c7f76ab87a\": container with ID starting with 2c106b68a27f251b0aa323e664dbc162c47f77a095870675163fc1c7f76ab87a not found: ID does not exist" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.501238 5108 scope.go:117] "RemoveContainer" containerID="ceb6c895f063e34fecf7a80e91ded0aba6095b63274ff38158805a59e6edfdcf" Jan 04 00:22:04 crc kubenswrapper[5108]: E0104 00:22:04.502015 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceb6c895f063e34fecf7a80e91ded0aba6095b63274ff38158805a59e6edfdcf\": container with ID starting with ceb6c895f063e34fecf7a80e91ded0aba6095b63274ff38158805a59e6edfdcf not found: ID does not exist" containerID="ceb6c895f063e34fecf7a80e91ded0aba6095b63274ff38158805a59e6edfdcf" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.502088 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceb6c895f063e34fecf7a80e91ded0aba6095b63274ff38158805a59e6edfdcf"} err="failed to get container status \"ceb6c895f063e34fecf7a80e91ded0aba6095b63274ff38158805a59e6edfdcf\": rpc error: code = NotFound desc = could not find container \"ceb6c895f063e34fecf7a80e91ded0aba6095b63274ff38158805a59e6edfdcf\": container with ID starting with ceb6c895f063e34fecf7a80e91ded0aba6095b63274ff38158805a59e6edfdcf not found: ID does not exist" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.502147 5108 scope.go:117] "RemoveContainer" containerID="9ca02fd651dcad92f4572ec7f186527a8984074514c99f8dc8723a14f0bb5428" Jan 04 00:22:04 crc kubenswrapper[5108]: E0104 00:22:04.502727 5108 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9ca02fd651dcad92f4572ec7f186527a8984074514c99f8dc8723a14f0bb5428\": container with ID starting with 9ca02fd651dcad92f4572ec7f186527a8984074514c99f8dc8723a14f0bb5428 not found: ID does not exist" containerID="9ca02fd651dcad92f4572ec7f186527a8984074514c99f8dc8723a14f0bb5428" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.502769 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ca02fd651dcad92f4572ec7f186527a8984074514c99f8dc8723a14f0bb5428"} err="failed to get container status \"9ca02fd651dcad92f4572ec7f186527a8984074514c99f8dc8723a14f0bb5428\": rpc error: code = NotFound desc = could not find container \"9ca02fd651dcad92f4572ec7f186527a8984074514c99f8dc8723a14f0bb5428\": container with ID starting with 9ca02fd651dcad92f4572ec7f186527a8984074514c99f8dc8723a14f0bb5428 not found: ID does not exist" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.502791 5108 scope.go:117] "RemoveContainer" containerID="71bb346536e06ba0117423a8b6637180393256b79d6aeb8295eb95f0866da85b" Jan 04 00:22:04 crc kubenswrapper[5108]: E0104 00:22:04.503175 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71bb346536e06ba0117423a8b6637180393256b79d6aeb8295eb95f0866da85b\": container with ID starting with 71bb346536e06ba0117423a8b6637180393256b79d6aeb8295eb95f0866da85b not found: ID does not exist" containerID="71bb346536e06ba0117423a8b6637180393256b79d6aeb8295eb95f0866da85b" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.503223 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71bb346536e06ba0117423a8b6637180393256b79d6aeb8295eb95f0866da85b"} err="failed to get container status \"71bb346536e06ba0117423a8b6637180393256b79d6aeb8295eb95f0866da85b\": rpc error: code = NotFound desc = could not find container 
\"71bb346536e06ba0117423a8b6637180393256b79d6aeb8295eb95f0866da85b\": container with ID starting with 71bb346536e06ba0117423a8b6637180393256b79d6aeb8295eb95f0866da85b not found: ID does not exist" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.503237 5108 scope.go:117] "RemoveContainer" containerID="44faeb57fa086d65419836ca35d54febc9a5fbdca1cd7c4f65aceecd1577f867" Jan 04 00:22:04 crc kubenswrapper[5108]: E0104 00:22:04.503491 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44faeb57fa086d65419836ca35d54febc9a5fbdca1cd7c4f65aceecd1577f867\": container with ID starting with 44faeb57fa086d65419836ca35d54febc9a5fbdca1cd7c4f65aceecd1577f867 not found: ID does not exist" containerID="44faeb57fa086d65419836ca35d54febc9a5fbdca1cd7c4f65aceecd1577f867" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.503543 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44faeb57fa086d65419836ca35d54febc9a5fbdca1cd7c4f65aceecd1577f867"} err="failed to get container status \"44faeb57fa086d65419836ca35d54febc9a5fbdca1cd7c4f65aceecd1577f867\": rpc error: code = NotFound desc = could not find container \"44faeb57fa086d65419836ca35d54febc9a5fbdca1cd7c4f65aceecd1577f867\": container with ID starting with 44faeb57fa086d65419836ca35d54febc9a5fbdca1cd7c4f65aceecd1577f867 not found: ID does not exist" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.503560 5108 scope.go:117] "RemoveContainer" containerID="07a9dbf38baca9c5b3fbe3dde40d4a145aa21599df789024c5de598cc56ae61d" Jan 04 00:22:04 crc kubenswrapper[5108]: E0104 00:22:04.503769 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07a9dbf38baca9c5b3fbe3dde40d4a145aa21599df789024c5de598cc56ae61d\": container with ID starting with 07a9dbf38baca9c5b3fbe3dde40d4a145aa21599df789024c5de598cc56ae61d not found: ID does not exist" 
containerID="07a9dbf38baca9c5b3fbe3dde40d4a145aa21599df789024c5de598cc56ae61d" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.503816 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07a9dbf38baca9c5b3fbe3dde40d4a145aa21599df789024c5de598cc56ae61d"} err="failed to get container status \"07a9dbf38baca9c5b3fbe3dde40d4a145aa21599df789024c5de598cc56ae61d\": rpc error: code = NotFound desc = could not find container \"07a9dbf38baca9c5b3fbe3dde40d4a145aa21599df789024c5de598cc56ae61d\": container with ID starting with 07a9dbf38baca9c5b3fbe3dde40d4a145aa21599df789024c5de598cc56ae61d not found: ID does not exist" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.503829 5108 scope.go:117] "RemoveContainer" containerID="f22b382b86afe25cecc9a71e51f0b968f7a21c0676745ebc7343989517586318" Jan 04 00:22:04 crc kubenswrapper[5108]: E0104 00:22:04.504096 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f22b382b86afe25cecc9a71e51f0b968f7a21c0676745ebc7343989517586318\": container with ID starting with f22b382b86afe25cecc9a71e51f0b968f7a21c0676745ebc7343989517586318 not found: ID does not exist" containerID="f22b382b86afe25cecc9a71e51f0b968f7a21c0676745ebc7343989517586318" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.504117 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f22b382b86afe25cecc9a71e51f0b968f7a21c0676745ebc7343989517586318"} err="failed to get container status \"f22b382b86afe25cecc9a71e51f0b968f7a21c0676745ebc7343989517586318\": rpc error: code = NotFound desc = could not find container \"f22b382b86afe25cecc9a71e51f0b968f7a21c0676745ebc7343989517586318\": container with ID starting with f22b382b86afe25cecc9a71e51f0b968f7a21c0676745ebc7343989517586318 not found: ID does not exist" Jan 04 00:22:04 crc kubenswrapper[5108]: I0104 00:22:04.589579 5108 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9gj6q\" (UniqueName: \"kubernetes.io/projected/d23c37b5-6c23-48f9-960a-a9c174d8430c-kube-api-access-9gj6q\") on node \"crc\" DevicePath \"\"" Jan 04 00:22:05 crc kubenswrapper[5108]: I0104 00:22:05.255390 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458102-msftx" event={"ID":"d23c37b5-6c23-48f9-960a-a9c174d8430c","Type":"ContainerDied","Data":"e1d90542555259ab029674f0eb22f8738bdc2874f4c0ddcaaad6e761e5752363"} Jan 04 00:22:05 crc kubenswrapper[5108]: I0104 00:22:05.255842 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1d90542555259ab029674f0eb22f8738bdc2874f4c0ddcaaad6e761e5752363" Jan 04 00:22:05 crc kubenswrapper[5108]: I0104 00:22:05.255453 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458102-msftx" Jan 04 00:22:05 crc kubenswrapper[5108]: I0104 00:22:05.262893 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" event={"ID":"e5983de4-b9c0-4e89-8ee9-159125653050","Type":"ContainerStarted","Data":"6539a2781c6a860a81671a2ea46ca97dd81dccd0a7a2e5799ee1e3d131fc840c"} Jan 04 00:22:05 crc kubenswrapper[5108]: I0104 00:22:05.262976 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" event={"ID":"e5983de4-b9c0-4e89-8ee9-159125653050","Type":"ContainerStarted","Data":"13fd8fefd26204a954353aefafb75a79d5ed9966005b3dcaa80060e6724e40b9"} Jan 04 00:22:05 crc kubenswrapper[5108]: I0104 00:22:05.262993 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" event={"ID":"e5983de4-b9c0-4e89-8ee9-159125653050","Type":"ContainerStarted","Data":"f21dc7461e8f5b02639d6c2f1dad1268018e6e130e3c06480b8d5cd3ee819261"} Jan 04 00:22:05 crc kubenswrapper[5108]: I0104 00:22:05.263022 5108 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" event={"ID":"e5983de4-b9c0-4e89-8ee9-159125653050","Type":"ContainerStarted","Data":"6124a7ae8ae538390473d0ab5eaa9af827adbfc067fc4dab6d57971f7d638ecf"} Jan 04 00:22:06 crc kubenswrapper[5108]: I0104 00:22:06.272786 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" event={"ID":"e5983de4-b9c0-4e89-8ee9-159125653050","Type":"ContainerStarted","Data":"590a5edd7be53329796c0ef368ba2f1d88dae3bde053d38ab96e654dba4bc843"} Jan 04 00:22:07 crc kubenswrapper[5108]: I0104 00:22:07.282832 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" event={"ID":"e5983de4-b9c0-4e89-8ee9-159125653050","Type":"ContainerStarted","Data":"52cb61381b85f57f473a650b95f6dd1f02aad4d1bef24c1cb9aefe7ae8e28a4a"} Jan 04 00:22:09 crc kubenswrapper[5108]: I0104 00:22:09.297417 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" event={"ID":"e5983de4-b9c0-4e89-8ee9-159125653050","Type":"ContainerStarted","Data":"ae93c83177b746d19a12b3c6fc7ccf96420ad9906741e9e6d054fabb99b3b15f"} Jan 04 00:22:12 crc kubenswrapper[5108]: I0104 00:22:12.324279 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" event={"ID":"e5983de4-b9c0-4e89-8ee9-159125653050","Type":"ContainerStarted","Data":"40e7b9a650da9f7c54120d5d8feef0f99fe1af8fea6524532a5622500a7d72ad"} Jan 04 00:22:12 crc kubenswrapper[5108]: I0104 00:22:12.325459 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:12 crc kubenswrapper[5108]: I0104 00:22:12.325483 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:12 crc kubenswrapper[5108]: I0104 00:22:12.365378 5108 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" podStartSLOduration=9.365339959 podStartE2EDuration="9.365339959s" podCreationTimestamp="2026-01-04 00:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:22:12.362019445 +0000 UTC m=+706.350584531" watchObservedRunningTime="2026-01-04 00:22:12.365339959 +0000 UTC m=+706.353905045" Jan 04 00:22:12 crc kubenswrapper[5108]: I0104 00:22:12.366543 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:13 crc kubenswrapper[5108]: I0104 00:22:13.331697 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:13 crc kubenswrapper[5108]: I0104 00:22:13.366141 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:24 crc kubenswrapper[5108]: I0104 00:22:24.917604 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 04 00:22:24 crc kubenswrapper[5108]: I0104 00:22:24.918778 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 04 00:22:24 crc kubenswrapper[5108]: I0104 00:22:24.918853 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-njl5v" Jan 04 00:22:24 crc kubenswrapper[5108]: I0104 00:22:24.919825 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"335e8dafd09ef6d4b5814847b54a00f48c49785e811fdaed2b4bdcd55dc20429"} pod="openshift-machine-config-operator/machine-config-daemon-njl5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 04 00:22:24 crc kubenswrapper[5108]: I0104 00:22:24.919896 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" containerID="cri-o://335e8dafd09ef6d4b5814847b54a00f48c49785e811fdaed2b4bdcd55dc20429" gracePeriod=600 Jan 04 00:22:25 crc kubenswrapper[5108]: I0104 00:22:25.417991 5108 generic.go:358] "Generic (PLEG): container finished" podID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerID="335e8dafd09ef6d4b5814847b54a00f48c49785e811fdaed2b4bdcd55dc20429" exitCode=0 Jan 04 00:22:25 crc kubenswrapper[5108]: I0104 00:22:25.418048 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerDied","Data":"335e8dafd09ef6d4b5814847b54a00f48c49785e811fdaed2b4bdcd55dc20429"} Jan 04 00:22:25 crc kubenswrapper[5108]: I0104 00:22:25.418622 5108 scope.go:117] "RemoveContainer" containerID="98c0ce6db2062cf99e2e7a19595c98fef731421d446df51d11c001f56a4c3cd2" Jan 04 00:22:26 crc kubenswrapper[5108]: I0104 00:22:26.428778 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" 
event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerStarted","Data":"c8dc27842f4ece5439b06d6ce112671ad3f7bc8894f51d9a8d835c365dc97f45"} Jan 04 00:22:45 crc kubenswrapper[5108]: I0104 00:22:45.372771 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2n54k" Jan 04 00:22:55 crc kubenswrapper[5108]: I0104 00:22:55.888446 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rhwb4"] Jan 04 00:22:55 crc kubenswrapper[5108]: I0104 00:22:55.891109 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d23c37b5-6c23-48f9-960a-a9c174d8430c" containerName="oc" Jan 04 00:22:55 crc kubenswrapper[5108]: I0104 00:22:55.891129 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d23c37b5-6c23-48f9-960a-a9c174d8430c" containerName="oc" Jan 04 00:22:55 crc kubenswrapper[5108]: I0104 00:22:55.891318 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d23c37b5-6c23-48f9-960a-a9c174d8430c" containerName="oc" Jan 04 00:22:55 crc kubenswrapper[5108]: I0104 00:22:55.901013 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:22:55 crc kubenswrapper[5108]: I0104 00:22:55.904369 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rhwb4"] Jan 04 00:22:55 crc kubenswrapper[5108]: I0104 00:22:55.973125 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-catalog-content\") pod \"community-operators-rhwb4\" (UID: \"0c80a1fb-1ebd-445d-83f3-5ded0620b07c\") " pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:22:55 crc kubenswrapper[5108]: I0104 00:22:55.973261 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4shz\" (UniqueName: \"kubernetes.io/projected/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-kube-api-access-b4shz\") pod \"community-operators-rhwb4\" (UID: \"0c80a1fb-1ebd-445d-83f3-5ded0620b07c\") " pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:22:55 crc kubenswrapper[5108]: I0104 00:22:55.973299 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-utilities\") pod \"community-operators-rhwb4\" (UID: \"0c80a1fb-1ebd-445d-83f3-5ded0620b07c\") " pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:22:56 crc kubenswrapper[5108]: I0104 00:22:56.075180 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-catalog-content\") pod \"community-operators-rhwb4\" (UID: \"0c80a1fb-1ebd-445d-83f3-5ded0620b07c\") " pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:22:56 crc kubenswrapper[5108]: I0104 00:22:56.075283 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b4shz\" (UniqueName: \"kubernetes.io/projected/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-kube-api-access-b4shz\") pod \"community-operators-rhwb4\" (UID: \"0c80a1fb-1ebd-445d-83f3-5ded0620b07c\") " pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:22:56 crc kubenswrapper[5108]: I0104 00:22:56.075340 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-utilities\") pod \"community-operators-rhwb4\" (UID: \"0c80a1fb-1ebd-445d-83f3-5ded0620b07c\") " pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:22:56 crc kubenswrapper[5108]: I0104 00:22:56.075865 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-catalog-content\") pod \"community-operators-rhwb4\" (UID: \"0c80a1fb-1ebd-445d-83f3-5ded0620b07c\") " pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:22:56 crc kubenswrapper[5108]: I0104 00:22:56.076079 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-utilities\") pod \"community-operators-rhwb4\" (UID: \"0c80a1fb-1ebd-445d-83f3-5ded0620b07c\") " pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:22:56 crc kubenswrapper[5108]: I0104 00:22:56.102193 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4shz\" (UniqueName: \"kubernetes.io/projected/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-kube-api-access-b4shz\") pod \"community-operators-rhwb4\" (UID: \"0c80a1fb-1ebd-445d-83f3-5ded0620b07c\") " pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:22:56 crc kubenswrapper[5108]: I0104 00:22:56.228145 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:22:56 crc kubenswrapper[5108]: I0104 00:22:56.742628 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rhwb4"] Jan 04 00:22:57 crc kubenswrapper[5108]: I0104 00:22:57.650449 5108 generic.go:358] "Generic (PLEG): container finished" podID="0c80a1fb-1ebd-445d-83f3-5ded0620b07c" containerID="f5d1691770f63ef1ad58f03c2c00ffbac8b4776b50aaddb65a37ffc81b306ff5" exitCode=0 Jan 04 00:22:57 crc kubenswrapper[5108]: I0104 00:22:57.650542 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhwb4" event={"ID":"0c80a1fb-1ebd-445d-83f3-5ded0620b07c","Type":"ContainerDied","Data":"f5d1691770f63ef1ad58f03c2c00ffbac8b4776b50aaddb65a37ffc81b306ff5"} Jan 04 00:22:57 crc kubenswrapper[5108]: I0104 00:22:57.651326 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhwb4" event={"ID":"0c80a1fb-1ebd-445d-83f3-5ded0620b07c","Type":"ContainerStarted","Data":"dd65fcc67e4b85318b2aa677c6a5d763a7df178a2b89243d9791cd2d63d8323c"} Jan 04 00:22:59 crc kubenswrapper[5108]: I0104 00:22:59.668958 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhwb4" event={"ID":"0c80a1fb-1ebd-445d-83f3-5ded0620b07c","Type":"ContainerStarted","Data":"39f992438ba9c77f299c8db5b09aed6bf13183fbbe06b5e4f4e53ef87878afc4"} Jan 04 00:23:00 crc kubenswrapper[5108]: I0104 00:23:00.679410 5108 generic.go:358] "Generic (PLEG): container finished" podID="0c80a1fb-1ebd-445d-83f3-5ded0620b07c" containerID="39f992438ba9c77f299c8db5b09aed6bf13183fbbe06b5e4f4e53ef87878afc4" exitCode=0 Jan 04 00:23:00 crc kubenswrapper[5108]: I0104 00:23:00.679604 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhwb4" 
event={"ID":"0c80a1fb-1ebd-445d-83f3-5ded0620b07c","Type":"ContainerDied","Data":"39f992438ba9c77f299c8db5b09aed6bf13183fbbe06b5e4f4e53ef87878afc4"} Jan 04 00:23:02 crc kubenswrapper[5108]: I0104 00:23:02.696925 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhwb4" event={"ID":"0c80a1fb-1ebd-445d-83f3-5ded0620b07c","Type":"ContainerStarted","Data":"ca8357eab86483cb33c5ce3e80ba8c5610eab7e73c8eb7d4910fd5000a8c8a29"} Jan 04 00:23:06 crc kubenswrapper[5108]: I0104 00:23:06.229159 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:23:06 crc kubenswrapper[5108]: I0104 00:23:06.229679 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:23:06 crc kubenswrapper[5108]: I0104 00:23:06.291685 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:23:06 crc kubenswrapper[5108]: I0104 00:23:06.328113 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rhwb4" podStartSLOduration=9.810669022999999 podStartE2EDuration="11.328078567s" podCreationTimestamp="2026-01-04 00:22:55 +0000 UTC" firstStartedPulling="2026-01-04 00:22:57.651721726 +0000 UTC m=+751.640286822" lastFinishedPulling="2026-01-04 00:22:59.16913128 +0000 UTC m=+753.157696366" observedRunningTime="2026-01-04 00:23:02.723625745 +0000 UTC m=+756.712190841" watchObservedRunningTime="2026-01-04 00:23:06.328078567 +0000 UTC m=+760.316643703" Jan 04 00:23:06 crc kubenswrapper[5108]: I0104 00:23:06.771573 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:23:06 crc kubenswrapper[5108]: I0104 00:23:06.823705 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-rhwb4"] Jan 04 00:23:08 crc kubenswrapper[5108]: I0104 00:23:08.741631 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rhwb4" podUID="0c80a1fb-1ebd-445d-83f3-5ded0620b07c" containerName="registry-server" containerID="cri-o://ca8357eab86483cb33c5ce3e80ba8c5610eab7e73c8eb7d4910fd5000a8c8a29" gracePeriod=2 Jan 04 00:23:09 crc kubenswrapper[5108]: I0104 00:23:09.750917 5108 generic.go:358] "Generic (PLEG): container finished" podID="0c80a1fb-1ebd-445d-83f3-5ded0620b07c" containerID="ca8357eab86483cb33c5ce3e80ba8c5610eab7e73c8eb7d4910fd5000a8c8a29" exitCode=0 Jan 04 00:23:09 crc kubenswrapper[5108]: I0104 00:23:09.751003 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhwb4" event={"ID":"0c80a1fb-1ebd-445d-83f3-5ded0620b07c","Type":"ContainerDied","Data":"ca8357eab86483cb33c5ce3e80ba8c5610eab7e73c8eb7d4910fd5000a8c8a29"} Jan 04 00:23:09 crc kubenswrapper[5108]: I0104 00:23:09.751586 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhwb4" event={"ID":"0c80a1fb-1ebd-445d-83f3-5ded0620b07c","Type":"ContainerDied","Data":"dd65fcc67e4b85318b2aa677c6a5d763a7df178a2b89243d9791cd2d63d8323c"} Jan 04 00:23:09 crc kubenswrapper[5108]: I0104 00:23:09.751602 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd65fcc67e4b85318b2aa677c6a5d763a7df178a2b89243d9791cd2d63d8323c" Jan 04 00:23:09 crc kubenswrapper[5108]: I0104 00:23:09.781293 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:23:09 crc kubenswrapper[5108]: I0104 00:23:09.886406 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-utilities\") pod \"0c80a1fb-1ebd-445d-83f3-5ded0620b07c\" (UID: \"0c80a1fb-1ebd-445d-83f3-5ded0620b07c\") " Jan 04 00:23:09 crc kubenswrapper[5108]: I0104 00:23:09.886482 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-catalog-content\") pod \"0c80a1fb-1ebd-445d-83f3-5ded0620b07c\" (UID: \"0c80a1fb-1ebd-445d-83f3-5ded0620b07c\") " Jan 04 00:23:09 crc kubenswrapper[5108]: I0104 00:23:09.886648 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4shz\" (UniqueName: \"kubernetes.io/projected/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-kube-api-access-b4shz\") pod \"0c80a1fb-1ebd-445d-83f3-5ded0620b07c\" (UID: \"0c80a1fb-1ebd-445d-83f3-5ded0620b07c\") " Jan 04 00:23:09 crc kubenswrapper[5108]: I0104 00:23:09.887810 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-utilities" (OuterVolumeSpecName: "utilities") pod "0c80a1fb-1ebd-445d-83f3-5ded0620b07c" (UID: "0c80a1fb-1ebd-445d-83f3-5ded0620b07c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:23:09 crc kubenswrapper[5108]: I0104 00:23:09.894792 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-kube-api-access-b4shz" (OuterVolumeSpecName: "kube-api-access-b4shz") pod "0c80a1fb-1ebd-445d-83f3-5ded0620b07c" (UID: "0c80a1fb-1ebd-445d-83f3-5ded0620b07c"). InnerVolumeSpecName "kube-api-access-b4shz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:23:09 crc kubenswrapper[5108]: I0104 00:23:09.937827 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c80a1fb-1ebd-445d-83f3-5ded0620b07c" (UID: "0c80a1fb-1ebd-445d-83f3-5ded0620b07c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:23:09 crc kubenswrapper[5108]: I0104 00:23:09.988542 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-utilities\") on node \"crc\" DevicePath \"\"" Jan 04 00:23:09 crc kubenswrapper[5108]: I0104 00:23:09.988595 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:23:09 crc kubenswrapper[5108]: I0104 00:23:09.988609 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b4shz\" (UniqueName: \"kubernetes.io/projected/0c80a1fb-1ebd-445d-83f3-5ded0620b07c-kube-api-access-b4shz\") on node \"crc\" DevicePath \"\"" Jan 04 00:23:10 crc kubenswrapper[5108]: I0104 00:23:10.758023 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rhwb4" Jan 04 00:23:10 crc kubenswrapper[5108]: I0104 00:23:10.793999 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rhwb4"] Jan 04 00:23:10 crc kubenswrapper[5108]: I0104 00:23:10.804323 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rhwb4"] Jan 04 00:23:12 crc kubenswrapper[5108]: I0104 00:23:12.457823 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c80a1fb-1ebd-445d-83f3-5ded0620b07c" path="/var/lib/kubelet/pods/0c80a1fb-1ebd-445d-83f3-5ded0620b07c/volumes" Jan 04 00:23:18 crc kubenswrapper[5108]: I0104 00:23:18.651944 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zbq58"] Jan 04 00:23:18 crc kubenswrapper[5108]: I0104 00:23:18.653222 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zbq58" podUID="3f0916ca-f3c6-4a23-add3-1dcede582a7e" containerName="registry-server" containerID="cri-o://7c6f90b1e08d1b9dc634e1005a66a89e3ecd98de1364c1e3164e46ae49ed64a0" gracePeriod=30 Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.502254 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zbq58" Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.627282 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0916ca-f3c6-4a23-add3-1dcede582a7e-catalog-content\") pod \"3f0916ca-f3c6-4a23-add3-1dcede582a7e\" (UID: \"3f0916ca-f3c6-4a23-add3-1dcede582a7e\") " Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.627703 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5sjg\" (UniqueName: \"kubernetes.io/projected/3f0916ca-f3c6-4a23-add3-1dcede582a7e-kube-api-access-t5sjg\") pod \"3f0916ca-f3c6-4a23-add3-1dcede582a7e\" (UID: \"3f0916ca-f3c6-4a23-add3-1dcede582a7e\") " Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.627761 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0916ca-f3c6-4a23-add3-1dcede582a7e-utilities\") pod \"3f0916ca-f3c6-4a23-add3-1dcede582a7e\" (UID: \"3f0916ca-f3c6-4a23-add3-1dcede582a7e\") " Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.629278 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0916ca-f3c6-4a23-add3-1dcede582a7e-utilities" (OuterVolumeSpecName: "utilities") pod "3f0916ca-f3c6-4a23-add3-1dcede582a7e" (UID: "3f0916ca-f3c6-4a23-add3-1dcede582a7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.635674 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f0916ca-f3c6-4a23-add3-1dcede582a7e-kube-api-access-t5sjg" (OuterVolumeSpecName: "kube-api-access-t5sjg") pod "3f0916ca-f3c6-4a23-add3-1dcede582a7e" (UID: "3f0916ca-f3c6-4a23-add3-1dcede582a7e"). InnerVolumeSpecName "kube-api-access-t5sjg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.661324 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0916ca-f3c6-4a23-add3-1dcede582a7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f0916ca-f3c6-4a23-add3-1dcede582a7e" (UID: "3f0916ca-f3c6-4a23-add3-1dcede582a7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.729793 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0916ca-f3c6-4a23-add3-1dcede582a7e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.729840 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t5sjg\" (UniqueName: \"kubernetes.io/projected/3f0916ca-f3c6-4a23-add3-1dcede582a7e-kube-api-access-t5sjg\") on node \"crc\" DevicePath \"\"" Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.729855 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0916ca-f3c6-4a23-add3-1dcede582a7e-utilities\") on node \"crc\" DevicePath \"\"" Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.846147 5108 generic.go:358] "Generic (PLEG): container finished" podID="3f0916ca-f3c6-4a23-add3-1dcede582a7e" containerID="7c6f90b1e08d1b9dc634e1005a66a89e3ecd98de1364c1e3164e46ae49ed64a0" exitCode=0 Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.846604 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zbq58" event={"ID":"3f0916ca-f3c6-4a23-add3-1dcede582a7e","Type":"ContainerDied","Data":"7c6f90b1e08d1b9dc634e1005a66a89e3ecd98de1364c1e3164e46ae49ed64a0"} Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.846680 5108 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-zbq58" event={"ID":"3f0916ca-f3c6-4a23-add3-1dcede582a7e","Type":"ContainerDied","Data":"9a299d62b9afbd3dc4919e677d215d30ab8f0e02cb33423c56fa133b3441cea8"} Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.846708 5108 scope.go:117] "RemoveContainer" containerID="7c6f90b1e08d1b9dc634e1005a66a89e3ecd98de1364c1e3164e46ae49ed64a0" Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.846781 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zbq58" Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.871973 5108 scope.go:117] "RemoveContainer" containerID="ca99446bc276f4e6fd74cce2929d595175cab5572da3c8399a166f7b370bdb01" Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.890795 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zbq58"] Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.894201 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zbq58"] Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.905795 5108 scope.go:117] "RemoveContainer" containerID="36fb5f19075f4416c0a5b5851c29a86fff076a68235cffd7931476441ce5a824" Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.924683 5108 scope.go:117] "RemoveContainer" containerID="7c6f90b1e08d1b9dc634e1005a66a89e3ecd98de1364c1e3164e46ae49ed64a0" Jan 04 00:23:19 crc kubenswrapper[5108]: E0104 00:23:19.925099 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6f90b1e08d1b9dc634e1005a66a89e3ecd98de1364c1e3164e46ae49ed64a0\": container with ID starting with 7c6f90b1e08d1b9dc634e1005a66a89e3ecd98de1364c1e3164e46ae49ed64a0 not found: ID does not exist" containerID="7c6f90b1e08d1b9dc634e1005a66a89e3ecd98de1364c1e3164e46ae49ed64a0" Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.925155 5108 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6f90b1e08d1b9dc634e1005a66a89e3ecd98de1364c1e3164e46ae49ed64a0"} err="failed to get container status \"7c6f90b1e08d1b9dc634e1005a66a89e3ecd98de1364c1e3164e46ae49ed64a0\": rpc error: code = NotFound desc = could not find container \"7c6f90b1e08d1b9dc634e1005a66a89e3ecd98de1364c1e3164e46ae49ed64a0\": container with ID starting with 7c6f90b1e08d1b9dc634e1005a66a89e3ecd98de1364c1e3164e46ae49ed64a0 not found: ID does not exist" Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.925179 5108 scope.go:117] "RemoveContainer" containerID="ca99446bc276f4e6fd74cce2929d595175cab5572da3c8399a166f7b370bdb01" Jan 04 00:23:19 crc kubenswrapper[5108]: E0104 00:23:19.925924 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca99446bc276f4e6fd74cce2929d595175cab5572da3c8399a166f7b370bdb01\": container with ID starting with ca99446bc276f4e6fd74cce2929d595175cab5572da3c8399a166f7b370bdb01 not found: ID does not exist" containerID="ca99446bc276f4e6fd74cce2929d595175cab5572da3c8399a166f7b370bdb01" Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.925952 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca99446bc276f4e6fd74cce2929d595175cab5572da3c8399a166f7b370bdb01"} err="failed to get container status \"ca99446bc276f4e6fd74cce2929d595175cab5572da3c8399a166f7b370bdb01\": rpc error: code = NotFound desc = could not find container \"ca99446bc276f4e6fd74cce2929d595175cab5572da3c8399a166f7b370bdb01\": container with ID starting with ca99446bc276f4e6fd74cce2929d595175cab5572da3c8399a166f7b370bdb01 not found: ID does not exist" Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.925970 5108 scope.go:117] "RemoveContainer" containerID="36fb5f19075f4416c0a5b5851c29a86fff076a68235cffd7931476441ce5a824" Jan 04 00:23:19 crc kubenswrapper[5108]: E0104 
00:23:19.926748 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36fb5f19075f4416c0a5b5851c29a86fff076a68235cffd7931476441ce5a824\": container with ID starting with 36fb5f19075f4416c0a5b5851c29a86fff076a68235cffd7931476441ce5a824 not found: ID does not exist" containerID="36fb5f19075f4416c0a5b5851c29a86fff076a68235cffd7931476441ce5a824" Jan 04 00:23:19 crc kubenswrapper[5108]: I0104 00:23:19.926802 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36fb5f19075f4416c0a5b5851c29a86fff076a68235cffd7931476441ce5a824"} err="failed to get container status \"36fb5f19075f4416c0a5b5851c29a86fff076a68235cffd7931476441ce5a824\": rpc error: code = NotFound desc = could not find container \"36fb5f19075f4416c0a5b5851c29a86fff076a68235cffd7931476441ce5a824\": container with ID starting with 36fb5f19075f4416c0a5b5851c29a86fff076a68235cffd7931476441ce5a824 not found: ID does not exist" Jan 04 00:23:20 crc kubenswrapper[5108]: I0104 00:23:20.457499 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f0916ca-f3c6-4a23-add3-1dcede582a7e" path="/var/lib/kubelet/pods/3f0916ca-f3c6-4a23-add3-1dcede582a7e/volumes" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.492369 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k"] Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.493638 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c80a1fb-1ebd-445d-83f3-5ded0620b07c" containerName="registry-server" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.493658 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c80a1fb-1ebd-445d-83f3-5ded0620b07c" containerName="registry-server" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.493687 5108 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="3f0916ca-f3c6-4a23-add3-1dcede582a7e" containerName="extract-utilities" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.493695 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0916ca-f3c6-4a23-add3-1dcede582a7e" containerName="extract-utilities" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.493712 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c80a1fb-1ebd-445d-83f3-5ded0620b07c" containerName="extract-utilities" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.493722 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c80a1fb-1ebd-445d-83f3-5ded0620b07c" containerName="extract-utilities" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.493736 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f0916ca-f3c6-4a23-add3-1dcede582a7e" containerName="extract-content" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.493744 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0916ca-f3c6-4a23-add3-1dcede582a7e" containerName="extract-content" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.493755 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c80a1fb-1ebd-445d-83f3-5ded0620b07c" containerName="extract-content" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.493762 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c80a1fb-1ebd-445d-83f3-5ded0620b07c" containerName="extract-content" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.493772 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f0916ca-f3c6-4a23-add3-1dcede582a7e" containerName="registry-server" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.493779 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0916ca-f3c6-4a23-add3-1dcede582a7e" containerName="registry-server" Jan 04 00:23:22 crc kubenswrapper[5108]: 
I0104 00:23:22.493900 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3f0916ca-f3c6-4a23-add3-1dcede582a7e" containerName="registry-server" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.493918 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="0c80a1fb-1ebd-445d-83f3-5ded0620b07c" containerName="registry-server" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.511125 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k"] Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.511373 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.518032 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.671709 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k\" (UID: \"fb4c7df0-1c9a-427b-821a-2efffa9a2a75\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k" Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.671780 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmgbn\" (UniqueName: \"kubernetes.io/projected/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-kube-api-access-gmgbn\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k\" (UID: \"fb4c7df0-1c9a-427b-821a-2efffa9a2a75\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k" Jan 04 00:23:22 crc 
kubenswrapper[5108]: I0104 00:23:22.671819 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k\" (UID: \"fb4c7df0-1c9a-427b-821a-2efffa9a2a75\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k"
Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.773029 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k\" (UID: \"fb4c7df0-1c9a-427b-821a-2efffa9a2a75\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k"
Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.773543 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gmgbn\" (UniqueName: \"kubernetes.io/projected/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-kube-api-access-gmgbn\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k\" (UID: \"fb4c7df0-1c9a-427b-821a-2efffa9a2a75\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k"
Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.773688 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k\" (UID: \"fb4c7df0-1c9a-427b-821a-2efffa9a2a75\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k"
Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.773809 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k\" (UID: \"fb4c7df0-1c9a-427b-821a-2efffa9a2a75\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k"
Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.774037 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k\" (UID: \"fb4c7df0-1c9a-427b-821a-2efffa9a2a75\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k"
Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.802437 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmgbn\" (UniqueName: \"kubernetes.io/projected/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-kube-api-access-gmgbn\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k\" (UID: \"fb4c7df0-1c9a-427b-821a-2efffa9a2a75\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k"
Jan 04 00:23:22 crc kubenswrapper[5108]: I0104 00:23:22.831721 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k"
Jan 04 00:23:23 crc kubenswrapper[5108]: I0104 00:23:23.058941 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k"]
Jan 04 00:23:23 crc kubenswrapper[5108]: I0104 00:23:23.875505 5108 generic.go:358] "Generic (PLEG): container finished" podID="fb4c7df0-1c9a-427b-821a-2efffa9a2a75" containerID="f9ade2c9d1359a38c592029f4476707e42b28aabb8e4d3012c1b69e0433bdff0" exitCode=0
Jan 04 00:23:23 crc kubenswrapper[5108]: I0104 00:23:23.875573 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k" event={"ID":"fb4c7df0-1c9a-427b-821a-2efffa9a2a75","Type":"ContainerDied","Data":"f9ade2c9d1359a38c592029f4476707e42b28aabb8e4d3012c1b69e0433bdff0"}
Jan 04 00:23:23 crc kubenswrapper[5108]: I0104 00:23:23.876183 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k" event={"ID":"fb4c7df0-1c9a-427b-821a-2efffa9a2a75","Type":"ContainerStarted","Data":"c9c937a07fc2eaca222d96b50d51d1970574eaacbfe3775a8774b1c987235f14"}
Jan 04 00:23:25 crc kubenswrapper[5108]: I0104 00:23:25.244005 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mw5wb"]
Jan 04 00:23:25 crc kubenswrapper[5108]: I0104 00:23:25.386791 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mw5wb"]
Jan 04 00:23:25 crc kubenswrapper[5108]: I0104 00:23:25.387022 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mw5wb"
Jan 04 00:23:25 crc kubenswrapper[5108]: I0104 00:23:25.517740 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-utilities\") pod \"redhat-operators-mw5wb\" (UID: \"c08c9c29-bbd0-47ae-8449-0a08a5a97f86\") " pod="openshift-marketplace/redhat-operators-mw5wb"
Jan 04 00:23:25 crc kubenswrapper[5108]: I0104 00:23:25.517848 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qv78\" (UniqueName: \"kubernetes.io/projected/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-kube-api-access-5qv78\") pod \"redhat-operators-mw5wb\" (UID: \"c08c9c29-bbd0-47ae-8449-0a08a5a97f86\") " pod="openshift-marketplace/redhat-operators-mw5wb"
Jan 04 00:23:25 crc kubenswrapper[5108]: I0104 00:23:25.517872 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-catalog-content\") pod \"redhat-operators-mw5wb\" (UID: \"c08c9c29-bbd0-47ae-8449-0a08a5a97f86\") " pod="openshift-marketplace/redhat-operators-mw5wb"
Jan 04 00:23:25 crc kubenswrapper[5108]: I0104 00:23:25.620004 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-utilities\") pod \"redhat-operators-mw5wb\" (UID: \"c08c9c29-bbd0-47ae-8449-0a08a5a97f86\") " pod="openshift-marketplace/redhat-operators-mw5wb"
Jan 04 00:23:25 crc kubenswrapper[5108]: I0104 00:23:25.620598 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-utilities\") pod \"redhat-operators-mw5wb\" (UID: \"c08c9c29-bbd0-47ae-8449-0a08a5a97f86\") " pod="openshift-marketplace/redhat-operators-mw5wb"
Jan 04 00:23:25 crc kubenswrapper[5108]: I0104 00:23:25.621780 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5qv78\" (UniqueName: \"kubernetes.io/projected/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-kube-api-access-5qv78\") pod \"redhat-operators-mw5wb\" (UID: \"c08c9c29-bbd0-47ae-8449-0a08a5a97f86\") " pod="openshift-marketplace/redhat-operators-mw5wb"
Jan 04 00:23:25 crc kubenswrapper[5108]: I0104 00:23:25.621882 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-catalog-content\") pod \"redhat-operators-mw5wb\" (UID: \"c08c9c29-bbd0-47ae-8449-0a08a5a97f86\") " pod="openshift-marketplace/redhat-operators-mw5wb"
Jan 04 00:23:25 crc kubenswrapper[5108]: I0104 00:23:25.622484 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-catalog-content\") pod \"redhat-operators-mw5wb\" (UID: \"c08c9c29-bbd0-47ae-8449-0a08a5a97f86\") " pod="openshift-marketplace/redhat-operators-mw5wb"
Jan 04 00:23:25 crc kubenswrapper[5108]: I0104 00:23:25.648258 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qv78\" (UniqueName: \"kubernetes.io/projected/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-kube-api-access-5qv78\") pod \"redhat-operators-mw5wb\" (UID: \"c08c9c29-bbd0-47ae-8449-0a08a5a97f86\") " pod="openshift-marketplace/redhat-operators-mw5wb"
Jan 04 00:23:25 crc kubenswrapper[5108]: I0104 00:23:25.708403 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mw5wb"
Jan 04 00:23:25 crc kubenswrapper[5108]: I0104 00:23:25.893147 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k" event={"ID":"fb4c7df0-1c9a-427b-821a-2efffa9a2a75","Type":"ContainerStarted","Data":"0bbd3265394dd2bd4166e877e689041ba52a20bf13f9f7374bcd695fac438d13"}
Jan 04 00:23:26 crc kubenswrapper[5108]: I0104 00:23:26.010705 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mw5wb"]
Jan 04 00:23:26 crc kubenswrapper[5108]: I0104 00:23:26.902182 5108 generic.go:358] "Generic (PLEG): container finished" podID="c08c9c29-bbd0-47ae-8449-0a08a5a97f86" containerID="32bd764eff9734f54132dd0263fca9e406070d1a2f097359de70d0b139073050" exitCode=0
Jan 04 00:23:26 crc kubenswrapper[5108]: I0104 00:23:26.902269 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw5wb" event={"ID":"c08c9c29-bbd0-47ae-8449-0a08a5a97f86","Type":"ContainerDied","Data":"32bd764eff9734f54132dd0263fca9e406070d1a2f097359de70d0b139073050"}
Jan 04 00:23:26 crc kubenswrapper[5108]: I0104 00:23:26.902316 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw5wb" event={"ID":"c08c9c29-bbd0-47ae-8449-0a08a5a97f86","Type":"ContainerStarted","Data":"730cbbdcc1bc630671b9910c1fb39a6f4d488b2bcccd1e03adfcf7fd20d5ebae"}
Jan 04 00:23:26 crc kubenswrapper[5108]: I0104 00:23:26.906692 5108 generic.go:358] "Generic (PLEG): container finished" podID="fb4c7df0-1c9a-427b-821a-2efffa9a2a75" containerID="0bbd3265394dd2bd4166e877e689041ba52a20bf13f9f7374bcd695fac438d13" exitCode=0
Jan 04 00:23:26 crc kubenswrapper[5108]: I0104 00:23:26.906788 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k" event={"ID":"fb4c7df0-1c9a-427b-821a-2efffa9a2a75","Type":"ContainerDied","Data":"0bbd3265394dd2bd4166e877e689041ba52a20bf13f9f7374bcd695fac438d13"}
Jan 04 00:23:27 crc kubenswrapper[5108]: I0104 00:23:27.920257 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw5wb" event={"ID":"c08c9c29-bbd0-47ae-8449-0a08a5a97f86","Type":"ContainerStarted","Data":"48ffa02dab4807df5e7f7b89ad1b25f6ac59edf4b5f6fb7401a831e67faeb7d0"}
Jan 04 00:23:27 crc kubenswrapper[5108]: I0104 00:23:27.925179 5108 generic.go:358] "Generic (PLEG): container finished" podID="fb4c7df0-1c9a-427b-821a-2efffa9a2a75" containerID="731efaf6d03241bf768567ba4019f85421624dbc1b2d80ae253e2849c79dcba2" exitCode=0
Jan 04 00:23:27 crc kubenswrapper[5108]: I0104 00:23:27.925459 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k" event={"ID":"fb4c7df0-1c9a-427b-821a-2efffa9a2a75","Type":"ContainerDied","Data":"731efaf6d03241bf768567ba4019f85421624dbc1b2d80ae253e2849c79dcba2"}
Jan 04 00:23:29 crc kubenswrapper[5108]: I0104 00:23:29.689964 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k"
Jan 04 00:23:29 crc kubenswrapper[5108]: I0104 00:23:29.733840 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmgbn\" (UniqueName: \"kubernetes.io/projected/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-kube-api-access-gmgbn\") pod \"fb4c7df0-1c9a-427b-821a-2efffa9a2a75\" (UID: \"fb4c7df0-1c9a-427b-821a-2efffa9a2a75\") "
Jan 04 00:23:29 crc kubenswrapper[5108]: I0104 00:23:29.733901 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-bundle\") pod \"fb4c7df0-1c9a-427b-821a-2efffa9a2a75\" (UID: \"fb4c7df0-1c9a-427b-821a-2efffa9a2a75\") "
Jan 04 00:23:29 crc kubenswrapper[5108]: I0104 00:23:29.733973 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-util\") pod \"fb4c7df0-1c9a-427b-821a-2efffa9a2a75\" (UID: \"fb4c7df0-1c9a-427b-821a-2efffa9a2a75\") "
Jan 04 00:23:29 crc kubenswrapper[5108]: I0104 00:23:29.736538 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-bundle" (OuterVolumeSpecName: "bundle") pod "fb4c7df0-1c9a-427b-821a-2efffa9a2a75" (UID: "fb4c7df0-1c9a-427b-821a-2efffa9a2a75"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:23:29 crc kubenswrapper[5108]: I0104 00:23:29.741372 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-kube-api-access-gmgbn" (OuterVolumeSpecName: "kube-api-access-gmgbn") pod "fb4c7df0-1c9a-427b-821a-2efffa9a2a75" (UID: "fb4c7df0-1c9a-427b-821a-2efffa9a2a75"). InnerVolumeSpecName "kube-api-access-gmgbn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:23:29 crc kubenswrapper[5108]: I0104 00:23:29.752939 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-util" (OuterVolumeSpecName: "util") pod "fb4c7df0-1c9a-427b-821a-2efffa9a2a75" (UID: "fb4c7df0-1c9a-427b-821a-2efffa9a2a75"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:23:29 crc kubenswrapper[5108]: I0104 00:23:29.835082 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gmgbn\" (UniqueName: \"kubernetes.io/projected/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-kube-api-access-gmgbn\") on node \"crc\" DevicePath \"\""
Jan 04 00:23:29 crc kubenswrapper[5108]: I0104 00:23:29.835473 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-bundle\") on node \"crc\" DevicePath \"\""
Jan 04 00:23:29 crc kubenswrapper[5108]: I0104 00:23:29.835568 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb4c7df0-1c9a-427b-821a-2efffa9a2a75-util\") on node \"crc\" DevicePath \"\""
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.010460 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k" event={"ID":"fb4c7df0-1c9a-427b-821a-2efffa9a2a75","Type":"ContainerDied","Data":"c9c937a07fc2eaca222d96b50d51d1970574eaacbfe3775a8774b1c987235f14"}
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.010559 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9c937a07fc2eaca222d96b50d51d1970574eaacbfe3775a8774b1c987235f14"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.010501 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.691558 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw"]
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.692837 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb4c7df0-1c9a-427b-821a-2efffa9a2a75" containerName="pull"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.692856 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb4c7df0-1c9a-427b-821a-2efffa9a2a75" containerName="pull"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.692876 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb4c7df0-1c9a-427b-821a-2efffa9a2a75" containerName="extract"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.692883 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb4c7df0-1c9a-427b-821a-2efffa9a2a75" containerName="extract"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.692899 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb4c7df0-1c9a-427b-821a-2efffa9a2a75" containerName="util"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.692905 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb4c7df0-1c9a-427b-821a-2efffa9a2a75" containerName="util"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.693037 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="fb4c7df0-1c9a-427b-821a-2efffa9a2a75" containerName="extract"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.699273 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.703378 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.705890 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw"]
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.810848 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ed7bb44-e54e-4477-a030-1b100090455f-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw\" (UID: \"6ed7bb44-e54e-4477-a030-1b100090455f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.811066 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g28st\" (UniqueName: \"kubernetes.io/projected/6ed7bb44-e54e-4477-a030-1b100090455f-kube-api-access-g28st\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw\" (UID: \"6ed7bb44-e54e-4477-a030-1b100090455f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.811541 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ed7bb44-e54e-4477-a030-1b100090455f-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw\" (UID: \"6ed7bb44-e54e-4477-a030-1b100090455f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.913661 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g28st\" (UniqueName: \"kubernetes.io/projected/6ed7bb44-e54e-4477-a030-1b100090455f-kube-api-access-g28st\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw\" (UID: \"6ed7bb44-e54e-4477-a030-1b100090455f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.913886 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ed7bb44-e54e-4477-a030-1b100090455f-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw\" (UID: \"6ed7bb44-e54e-4477-a030-1b100090455f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.913928 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ed7bb44-e54e-4477-a030-1b100090455f-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw\" (UID: \"6ed7bb44-e54e-4477-a030-1b100090455f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.914648 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ed7bb44-e54e-4477-a030-1b100090455f-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw\" (UID: \"6ed7bb44-e54e-4477-a030-1b100090455f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.914731 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ed7bb44-e54e-4477-a030-1b100090455f-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw\" (UID: \"6ed7bb44-e54e-4477-a030-1b100090455f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw"
Jan 04 00:23:30 crc kubenswrapper[5108]: I0104 00:23:30.936499 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g28st\" (UniqueName: \"kubernetes.io/projected/6ed7bb44-e54e-4477-a030-1b100090455f-kube-api-access-g28st\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw\" (UID: \"6ed7bb44-e54e-4477-a030-1b100090455f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw"
Jan 04 00:23:31 crc kubenswrapper[5108]: I0104 00:23:31.029056 5108 generic.go:358] "Generic (PLEG): container finished" podID="c08c9c29-bbd0-47ae-8449-0a08a5a97f86" containerID="48ffa02dab4807df5e7f7b89ad1b25f6ac59edf4b5f6fb7401a831e67faeb7d0" exitCode=0
Jan 04 00:23:31 crc kubenswrapper[5108]: I0104 00:23:31.029351 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw5wb" event={"ID":"c08c9c29-bbd0-47ae-8449-0a08a5a97f86","Type":"ContainerDied","Data":"48ffa02dab4807df5e7f7b89ad1b25f6ac59edf4b5f6fb7401a831e67faeb7d0"}
Jan 04 00:23:31 crc kubenswrapper[5108]: I0104 00:23:31.084676 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw"
Jan 04 00:23:31 crc kubenswrapper[5108]: I0104 00:23:31.684974 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t"]
Jan 04 00:23:31 crc kubenswrapper[5108]: I0104 00:23:31.717351 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t"]
Jan 04 00:23:31 crc kubenswrapper[5108]: I0104 00:23:31.717609 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t"
Jan 04 00:23:31 crc kubenswrapper[5108]: I0104 00:23:31.806472 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bfc83f87-93c5-4a13-9807-1f22d71c0214-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t\" (UID: \"bfc83f87-93c5-4a13-9807-1f22d71c0214\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t"
Jan 04 00:23:31 crc kubenswrapper[5108]: I0104 00:23:31.806663 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x498\" (UniqueName: \"kubernetes.io/projected/bfc83f87-93c5-4a13-9807-1f22d71c0214-kube-api-access-2x498\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t\" (UID: \"bfc83f87-93c5-4a13-9807-1f22d71c0214\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t"
Jan 04 00:23:31 crc kubenswrapper[5108]: I0104 00:23:31.806938 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bfc83f87-93c5-4a13-9807-1f22d71c0214-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t\" (UID: \"bfc83f87-93c5-4a13-9807-1f22d71c0214\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t"
Jan 04 00:23:31 crc kubenswrapper[5108]: I0104 00:23:31.908589 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bfc83f87-93c5-4a13-9807-1f22d71c0214-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t\" (UID: \"bfc83f87-93c5-4a13-9807-1f22d71c0214\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t"
Jan 04 00:23:31 crc kubenswrapper[5108]: I0104 00:23:31.908669 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2x498\" (UniqueName: \"kubernetes.io/projected/bfc83f87-93c5-4a13-9807-1f22d71c0214-kube-api-access-2x498\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t\" (UID: \"bfc83f87-93c5-4a13-9807-1f22d71c0214\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t"
Jan 04 00:23:31 crc kubenswrapper[5108]: I0104 00:23:31.908735 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bfc83f87-93c5-4a13-9807-1f22d71c0214-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t\" (UID: \"bfc83f87-93c5-4a13-9807-1f22d71c0214\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t"
Jan 04 00:23:31 crc kubenswrapper[5108]: I0104 00:23:31.909679 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bfc83f87-93c5-4a13-9807-1f22d71c0214-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t\" (UID: \"bfc83f87-93c5-4a13-9807-1f22d71c0214\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t"
Jan 04 00:23:31 crc kubenswrapper[5108]: I0104 00:23:31.909732 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bfc83f87-93c5-4a13-9807-1f22d71c0214-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t\" (UID: \"bfc83f87-93c5-4a13-9807-1f22d71c0214\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t"
Jan 04 00:23:32 crc kubenswrapper[5108]: I0104 00:23:32.003564 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x498\" (UniqueName: \"kubernetes.io/projected/bfc83f87-93c5-4a13-9807-1f22d71c0214-kube-api-access-2x498\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t\" (UID: \"bfc83f87-93c5-4a13-9807-1f22d71c0214\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t"
Jan 04 00:23:32 crc kubenswrapper[5108]: I0104 00:23:32.038387 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw5wb" event={"ID":"c08c9c29-bbd0-47ae-8449-0a08a5a97f86","Type":"ContainerStarted","Data":"ab123c36bbce949a15a3f00c44078bff8d811792207be0281ca0799822013940"}
Jan 04 00:23:32 crc kubenswrapper[5108]: I0104 00:23:32.073441 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t"
Jan 04 00:23:32 crc kubenswrapper[5108]: I0104 00:23:32.573558 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mw5wb" podStartSLOduration=6.808529588 podStartE2EDuration="7.573530176s" podCreationTimestamp="2026-01-04 00:23:25 +0000 UTC" firstStartedPulling="2026-01-04 00:23:26.903108692 +0000 UTC m=+780.891673778" lastFinishedPulling="2026-01-04 00:23:27.66810928 +0000 UTC m=+781.656674366" observedRunningTime="2026-01-04 00:23:32.168030848 +0000 UTC m=+786.156595944" watchObservedRunningTime="2026-01-04 00:23:32.573530176 +0000 UTC m=+786.562095262"
Jan 04 00:23:32 crc kubenswrapper[5108]: I0104 00:23:32.577181 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw"]
Jan 04 00:23:32 crc kubenswrapper[5108]: W0104 00:23:32.632838 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ed7bb44_e54e_4477_a030_1b100090455f.slice/crio-5f4617640af298973a7051ceb7a577a38b4e1b1cbd26a0197817f74de2ddf749 WatchSource:0}: Error finding container 5f4617640af298973a7051ceb7a577a38b4e1b1cbd26a0197817f74de2ddf749: Status 404 returned error can't find the container with id 5f4617640af298973a7051ceb7a577a38b4e1b1cbd26a0197817f74de2ddf749
Jan 04 00:23:33 crc kubenswrapper[5108]: I0104 00:23:33.046606 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw" event={"ID":"6ed7bb44-e54e-4477-a030-1b100090455f","Type":"ContainerStarted","Data":"5f4617640af298973a7051ceb7a577a38b4e1b1cbd26a0197817f74de2ddf749"}
Jan 04 00:23:33 crc kubenswrapper[5108]: I0104 00:23:33.272334 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t"]
Jan 04 00:23:33 crc kubenswrapper[5108]: W0104 00:23:33.395928 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfc83f87_93c5_4a13_9807_1f22d71c0214.slice/crio-29f332709e379881daeb569a5519599caa7647a84767a6bf9bda11e11c23e815 WatchSource:0}: Error finding container 29f332709e379881daeb569a5519599caa7647a84767a6bf9bda11e11c23e815: Status 404 returned error can't find the container with id 29f332709e379881daeb569a5519599caa7647a84767a6bf9bda11e11c23e815
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.039751 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wr2d5"]
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.101357 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wr2d5"]
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.101612 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wr2d5"
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.103587 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-catalog-content\") pod \"certified-operators-wr2d5\" (UID: \"bfdb0a0d-28f5-46d9-ac35-c1e733798ded\") " pod="openshift-marketplace/certified-operators-wr2d5"
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.103659 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69zn2\" (UniqueName: \"kubernetes.io/projected/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-kube-api-access-69zn2\") pod \"certified-operators-wr2d5\" (UID: \"bfdb0a0d-28f5-46d9-ac35-c1e733798ded\") " pod="openshift-marketplace/certified-operators-wr2d5"
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.103689 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-utilities\") pod \"certified-operators-wr2d5\" (UID: \"bfdb0a0d-28f5-46d9-ac35-c1e733798ded\") " pod="openshift-marketplace/certified-operators-wr2d5"
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.184468 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw" event={"ID":"6ed7bb44-e54e-4477-a030-1b100090455f","Type":"ContainerStarted","Data":"9f88f0b8250de6a0f041ad82533b3c439cb3ccffae49ae9e9404aaa5d95a8d95"}
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.186436 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t" event={"ID":"bfc83f87-93c5-4a13-9807-1f22d71c0214","Type":"ContainerStarted","Data":"479d08938c366acc77ad344d41ceae10cc99c1f792c94733566b73205f05bab2"}
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.186521 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t" event={"ID":"bfc83f87-93c5-4a13-9807-1f22d71c0214","Type":"ContainerStarted","Data":"29f332709e379881daeb569a5519599caa7647a84767a6bf9bda11e11c23e815"}
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.205636 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-catalog-content\") pod \"certified-operators-wr2d5\" (UID: \"bfdb0a0d-28f5-46d9-ac35-c1e733798ded\") " pod="openshift-marketplace/certified-operators-wr2d5"
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.206803 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-catalog-content\") pod \"certified-operators-wr2d5\" (UID: \"bfdb0a0d-28f5-46d9-ac35-c1e733798ded\") " pod="openshift-marketplace/certified-operators-wr2d5"
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.207352 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-69zn2\" (UniqueName: \"kubernetes.io/projected/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-kube-api-access-69zn2\") pod \"certified-operators-wr2d5\" (UID: \"bfdb0a0d-28f5-46d9-ac35-c1e733798ded\") " pod="openshift-marketplace/certified-operators-wr2d5"
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.207409 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-utilities\") pod \"certified-operators-wr2d5\" (UID: \"bfdb0a0d-28f5-46d9-ac35-c1e733798ded\") " pod="openshift-marketplace/certified-operators-wr2d5"
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.208853 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-utilities\") pod \"certified-operators-wr2d5\" (UID: \"bfdb0a0d-28f5-46d9-ac35-c1e733798ded\") " pod="openshift-marketplace/certified-operators-wr2d5"
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.252096 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-69zn2\" (UniqueName: \"kubernetes.io/projected/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-kube-api-access-69zn2\") pod \"certified-operators-wr2d5\" (UID: \"bfdb0a0d-28f5-46d9-ac35-c1e733798ded\") " pod="openshift-marketplace/certified-operators-wr2d5"
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.446138 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wr2d5"
Jan 04 00:23:34 crc kubenswrapper[5108]: I0104 00:23:34.785981 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wr2d5"]
Jan 04 00:23:35 crc kubenswrapper[5108]: I0104 00:23:35.195052 5108 generic.go:358] "Generic (PLEG): container finished" podID="6ed7bb44-e54e-4477-a030-1b100090455f" containerID="9f88f0b8250de6a0f041ad82533b3c439cb3ccffae49ae9e9404aaa5d95a8d95" exitCode=0
Jan 04 00:23:35 crc kubenswrapper[5108]: I0104 00:23:35.195192 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw" event={"ID":"6ed7bb44-e54e-4477-a030-1b100090455f","Type":"ContainerDied","Data":"9f88f0b8250de6a0f041ad82533b3c439cb3ccffae49ae9e9404aaa5d95a8d95"}
Jan 04 00:23:35 crc kubenswrapper[5108]: I0104 00:23:35.197963 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wr2d5" event={"ID":"bfdb0a0d-28f5-46d9-ac35-c1e733798ded","Type":"ContainerStarted","Data":"6a2151008fdc073b9801e610656ad5d4780a50d675dbe78660703df67fbdbbcd"}
Jan 04 00:23:35 crc kubenswrapper[5108]: I0104 00:23:35.201115 5108 generic.go:358] "Generic (PLEG): container finished" podID="bfc83f87-93c5-4a13-9807-1f22d71c0214" containerID="479d08938c366acc77ad344d41ceae10cc99c1f792c94733566b73205f05bab2" exitCode=0
Jan 04 00:23:35 crc kubenswrapper[5108]: I0104 00:23:35.201293 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t" event={"ID":"bfc83f87-93c5-4a13-9807-1f22d71c0214","Type":"ContainerDied","Data":"479d08938c366acc77ad344d41ceae10cc99c1f792c94733566b73205f05bab2"}
Jan 04 00:23:35 crc kubenswrapper[5108]: I0104 00:23:35.708767 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-mw5wb"
Jan 04 00:23:35 crc kubenswrapper[5108]: I0104 00:23:35.709377 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mw5wb"
Jan 04 00:23:36 crc kubenswrapper[5108]: I0104 00:23:36.219993 5108 generic.go:358] "Generic (PLEG): container finished" podID="bfdb0a0d-28f5-46d9-ac35-c1e733798ded" containerID="e13a414fdb4776f51d21dc69fbb37293da9fb71a6d6585d262b3f0c157432740" exitCode=0
Jan 04 00:23:36 crc kubenswrapper[5108]: I0104 00:23:36.220188 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wr2d5" event={"ID":"bfdb0a0d-28f5-46d9-ac35-c1e733798ded","Type":"ContainerDied","Data":"e13a414fdb4776f51d21dc69fbb37293da9fb71a6d6585d262b3f0c157432740"}
Jan 04 00:23:36 crc kubenswrapper[5108]: I0104 00:23:36.937719 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mw5wb" podUID="c08c9c29-bbd0-47ae-8449-0a08a5a97f86"
containerName="registry-server" probeResult="failure" output=< Jan 04 00:23:36 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s Jan 04 00:23:36 crc kubenswrapper[5108]: > Jan 04 00:23:37 crc kubenswrapper[5108]: I0104 00:23:37.232824 5108 generic.go:358] "Generic (PLEG): container finished" podID="6ed7bb44-e54e-4477-a030-1b100090455f" containerID="15d8c383175151ccc7c39585f7c2f50bf48101917a6cfdeff4c41c8a53f44dcd" exitCode=0 Jan 04 00:23:37 crc kubenswrapper[5108]: I0104 00:23:37.233518 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw" event={"ID":"6ed7bb44-e54e-4477-a030-1b100090455f","Type":"ContainerDied","Data":"15d8c383175151ccc7c39585f7c2f50bf48101917a6cfdeff4c41c8a53f44dcd"} Jan 04 00:23:38 crc kubenswrapper[5108]: I0104 00:23:38.243617 5108 generic.go:358] "Generic (PLEG): container finished" podID="bfc83f87-93c5-4a13-9807-1f22d71c0214" containerID="8cb11f04ba1417d25aa3f67cd774c50d37769a1d5434bd14c713e75ad287a121" exitCode=0 Jan 04 00:23:38 crc kubenswrapper[5108]: I0104 00:23:38.243958 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t" event={"ID":"bfc83f87-93c5-4a13-9807-1f22d71c0214","Type":"ContainerDied","Data":"8cb11f04ba1417d25aa3f67cd774c50d37769a1d5434bd14c713e75ad287a121"} Jan 04 00:23:38 crc kubenswrapper[5108]: I0104 00:23:38.246998 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw" event={"ID":"6ed7bb44-e54e-4477-a030-1b100090455f","Type":"ContainerStarted","Data":"8fb7a7b6c45a4b032f1f1d338ab3ba143f86f0c7e0f1f12691098c7fe55cea74"} Jan 04 00:23:38 crc kubenswrapper[5108]: I0104 00:23:38.248643 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wr2d5" 
event={"ID":"bfdb0a0d-28f5-46d9-ac35-c1e733798ded","Type":"ContainerStarted","Data":"f5324c558468e5a31669e9d2c91293f23844e3b7c51ef399d00ad41a22ce4019"} Jan 04 00:23:38 crc kubenswrapper[5108]: I0104 00:23:38.440153 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw" podStartSLOduration=7.261771131 podStartE2EDuration="8.440128495s" podCreationTimestamp="2026-01-04 00:23:30 +0000 UTC" firstStartedPulling="2026-01-04 00:23:35.197427285 +0000 UTC m=+789.185992371" lastFinishedPulling="2026-01-04 00:23:36.375784649 +0000 UTC m=+790.364349735" observedRunningTime="2026-01-04 00:23:38.435821315 +0000 UTC m=+792.424386421" watchObservedRunningTime="2026-01-04 00:23:38.440128495 +0000 UTC m=+792.428693581" Jan 04 00:23:39 crc kubenswrapper[5108]: I0104 00:23:39.133324 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm"] Jan 04 00:23:39 crc kubenswrapper[5108]: I0104 00:23:39.258368 5108 generic.go:358] "Generic (PLEG): container finished" podID="6ed7bb44-e54e-4477-a030-1b100090455f" containerID="8fb7a7b6c45a4b032f1f1d338ab3ba143f86f0c7e0f1f12691098c7fe55cea74" exitCode=0 Jan 04 00:23:40 crc kubenswrapper[5108]: I0104 00:23:40.269000 5108 generic.go:358] "Generic (PLEG): container finished" podID="bfdb0a0d-28f5-46d9-ac35-c1e733798ded" containerID="f5324c558468e5a31669e9d2c91293f23844e3b7c51ef399d00ad41a22ce4019" exitCode=0 Jan 04 00:23:41 crc kubenswrapper[5108]: I0104 00:23:41.279355 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm"] Jan 04 00:23:41 crc kubenswrapper[5108]: I0104 00:23:41.279403 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw" 
event={"ID":"6ed7bb44-e54e-4477-a030-1b100090455f","Type":"ContainerDied","Data":"8fb7a7b6c45a4b032f1f1d338ab3ba143f86f0c7e0f1f12691098c7fe55cea74"} Jan 04 00:23:41 crc kubenswrapper[5108]: I0104 00:23:41.279441 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wr2d5" event={"ID":"bfdb0a0d-28f5-46d9-ac35-c1e733798ded","Type":"ContainerDied","Data":"f5324c558468e5a31669e9d2c91293f23844e3b7c51ef399d00ad41a22ce4019"} Jan 04 00:23:41 crc kubenswrapper[5108]: I0104 00:23:41.280001 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" Jan 04 00:23:41 crc kubenswrapper[5108]: I0104 00:23:41.295595 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t" event={"ID":"bfc83f87-93c5-4a13-9807-1f22d71c0214","Type":"ContainerStarted","Data":"48b928d2fb89693082484cdd92ea06c5cd195ce0503e6f9a1375529c1a9b0025"} Jan 04 00:23:41 crc kubenswrapper[5108]: I0104 00:23:41.423939 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm\" (UID: \"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" Jan 04 00:23:41 crc kubenswrapper[5108]: I0104 00:23:41.424578 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpqjn\" (UniqueName: \"kubernetes.io/projected/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-kube-api-access-tpqjn\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm\" (UID: \"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" Jan 04 00:23:41 crc kubenswrapper[5108]: I0104 00:23:41.424654 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm\" (UID: \"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" Jan 04 00:23:41 crc kubenswrapper[5108]: I0104 00:23:41.525704 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm\" (UID: \"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" Jan 04 00:23:41 crc kubenswrapper[5108]: I0104 00:23:41.525814 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm\" (UID: \"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" Jan 04 00:23:41 crc kubenswrapper[5108]: I0104 00:23:41.525856 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tpqjn\" (UniqueName: \"kubernetes.io/projected/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-kube-api-access-tpqjn\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm\" (UID: \"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" Jan 04 00:23:41 crc kubenswrapper[5108]: 
I0104 00:23:41.527219 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm\" (UID: \"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" Jan 04 00:23:41 crc kubenswrapper[5108]: I0104 00:23:41.527345 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm\" (UID: \"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" Jan 04 00:23:41 crc kubenswrapper[5108]: I0104 00:23:41.569495 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpqjn\" (UniqueName: \"kubernetes.io/projected/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-kube-api-access-tpqjn\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm\" (UID: \"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" Jan 04 00:23:41 crc kubenswrapper[5108]: I0104 00:23:41.720978 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" Jan 04 00:23:42 crc kubenswrapper[5108]: I0104 00:23:42.338427 5108 generic.go:358] "Generic (PLEG): container finished" podID="bfc83f87-93c5-4a13-9807-1f22d71c0214" containerID="48b928d2fb89693082484cdd92ea06c5cd195ce0503e6f9a1375529c1a9b0025" exitCode=0 Jan 04 00:23:42 crc kubenswrapper[5108]: I0104 00:23:42.339807 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t" event={"ID":"bfc83f87-93c5-4a13-9807-1f22d71c0214","Type":"ContainerDied","Data":"48b928d2fb89693082484cdd92ea06c5cd195ce0503e6f9a1375529c1a9b0025"} Jan 04 00:23:42 crc kubenswrapper[5108]: I0104 00:23:42.492886 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm"] Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.102174 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw" Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.140417 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ed7bb44-e54e-4477-a030-1b100090455f-util\") pod \"6ed7bb44-e54e-4477-a030-1b100090455f\" (UID: \"6ed7bb44-e54e-4477-a030-1b100090455f\") " Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.140691 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ed7bb44-e54e-4477-a030-1b100090455f-bundle\") pod \"6ed7bb44-e54e-4477-a030-1b100090455f\" (UID: \"6ed7bb44-e54e-4477-a030-1b100090455f\") " Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.140718 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g28st\" (UniqueName: \"kubernetes.io/projected/6ed7bb44-e54e-4477-a030-1b100090455f-kube-api-access-g28st\") pod \"6ed7bb44-e54e-4477-a030-1b100090455f\" (UID: \"6ed7bb44-e54e-4477-a030-1b100090455f\") " Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.142345 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ed7bb44-e54e-4477-a030-1b100090455f-bundle" (OuterVolumeSpecName: "bundle") pod "6ed7bb44-e54e-4477-a030-1b100090455f" (UID: "6ed7bb44-e54e-4477-a030-1b100090455f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.162899 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ed7bb44-e54e-4477-a030-1b100090455f-util" (OuterVolumeSpecName: "util") pod "6ed7bb44-e54e-4477-a030-1b100090455f" (UID: "6ed7bb44-e54e-4477-a030-1b100090455f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.175504 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ed7bb44-e54e-4477-a030-1b100090455f-kube-api-access-g28st" (OuterVolumeSpecName: "kube-api-access-g28st") pod "6ed7bb44-e54e-4477-a030-1b100090455f" (UID: "6ed7bb44-e54e-4477-a030-1b100090455f"). InnerVolumeSpecName "kube-api-access-g28st". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.254351 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ed7bb44-e54e-4477-a030-1b100090455f-util\") on node \"crc\" DevicePath \"\"" Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.254431 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ed7bb44-e54e-4477-a030-1b100090455f-bundle\") on node \"crc\" DevicePath \"\"" Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.254446 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g28st\" (UniqueName: \"kubernetes.io/projected/6ed7bb44-e54e-4477-a030-1b100090455f-kube-api-access-g28st\") on node \"crc\" DevicePath \"\"" Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.355014 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw" event={"ID":"6ed7bb44-e54e-4477-a030-1b100090455f","Type":"ContainerDied","Data":"5f4617640af298973a7051ceb7a577a38b4e1b1cbd26a0197817f74de2ddf749"} Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.355093 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f4617640af298973a7051ceb7a577a38b4e1b1cbd26a0197817f74de2ddf749" Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.355288 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw" Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.366481 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" event={"ID":"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72","Type":"ContainerStarted","Data":"ed537e136d95ad77260c3e03eb0fed98dd4e7af6fff9b4b28f3553a66004b28b"} Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.366539 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" event={"ID":"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72","Type":"ContainerStarted","Data":"0a85a886710f227b8dbb7c4511a1cce752874e039149cc86c8c635c05ef410a6"} Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.393271 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wr2d5" event={"ID":"bfdb0a0d-28f5-46d9-ac35-c1e733798ded","Type":"ContainerStarted","Data":"c3144d5c2eeccb2ac481bd0c2a7de14ee741208cda01ca38c0d854431b449dda"} Jan 04 00:23:43 crc kubenswrapper[5108]: I0104 00:23:43.977150 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t" Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.078154 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bfc83f87-93c5-4a13-9807-1f22d71c0214-bundle\") pod \"bfc83f87-93c5-4a13-9807-1f22d71c0214\" (UID: \"bfc83f87-93c5-4a13-9807-1f22d71c0214\") " Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.078289 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bfc83f87-93c5-4a13-9807-1f22d71c0214-util\") pod \"bfc83f87-93c5-4a13-9807-1f22d71c0214\" (UID: \"bfc83f87-93c5-4a13-9807-1f22d71c0214\") " Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.078437 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x498\" (UniqueName: \"kubernetes.io/projected/bfc83f87-93c5-4a13-9807-1f22d71c0214-kube-api-access-2x498\") pod \"bfc83f87-93c5-4a13-9807-1f22d71c0214\" (UID: \"bfc83f87-93c5-4a13-9807-1f22d71c0214\") " Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.079249 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfc83f87-93c5-4a13-9807-1f22d71c0214-bundle" (OuterVolumeSpecName: "bundle") pod "bfc83f87-93c5-4a13-9807-1f22d71c0214" (UID: "bfc83f87-93c5-4a13-9807-1f22d71c0214"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.093852 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfc83f87-93c5-4a13-9807-1f22d71c0214-util" (OuterVolumeSpecName: "util") pod "bfc83f87-93c5-4a13-9807-1f22d71c0214" (UID: "bfc83f87-93c5-4a13-9807-1f22d71c0214"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.101502 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfc83f87-93c5-4a13-9807-1f22d71c0214-kube-api-access-2x498" (OuterVolumeSpecName: "kube-api-access-2x498") pod "bfc83f87-93c5-4a13-9807-1f22d71c0214" (UID: "bfc83f87-93c5-4a13-9807-1f22d71c0214"). InnerVolumeSpecName "kube-api-access-2x498". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.180340 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2x498\" (UniqueName: \"kubernetes.io/projected/bfc83f87-93c5-4a13-9807-1f22d71c0214-kube-api-access-2x498\") on node \"crc\" DevicePath \"\"" Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.180392 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bfc83f87-93c5-4a13-9807-1f22d71c0214-bundle\") on node \"crc\" DevicePath \"\"" Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.180402 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bfc83f87-93c5-4a13-9807-1f22d71c0214-util\") on node \"crc\" DevicePath \"\"" Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.402882 5108 generic.go:358] "Generic (PLEG): container finished" podID="e2e9b244-16b4-4e6b-a6cf-e82f0d019f72" containerID="ed537e136d95ad77260c3e03eb0fed98dd4e7af6fff9b4b28f3553a66004b28b" exitCode=0 Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.402973 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" event={"ID":"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72","Type":"ContainerDied","Data":"ed537e136d95ad77260c3e03eb0fed98dd4e7af6fff9b4b28f3553a66004b28b"} Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.408436 5108 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t" Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.409812 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t" event={"ID":"bfc83f87-93c5-4a13-9807-1f22d71c0214","Type":"ContainerDied","Data":"29f332709e379881daeb569a5519599caa7647a84767a6bf9bda11e11c23e815"} Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.409880 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29f332709e379881daeb569a5519599caa7647a84767a6bf9bda11e11c23e815" Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.446982 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-wr2d5" Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.447042 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wr2d5" Jan 04 00:23:44 crc kubenswrapper[5108]: I0104 00:23:44.475407 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wr2d5" podStartSLOduration=9.614196614 podStartE2EDuration="10.475382757s" podCreationTimestamp="2026-01-04 00:23:34 +0000 UTC" firstStartedPulling="2026-01-04 00:23:36.221131108 +0000 UTC m=+790.209696204" lastFinishedPulling="2026-01-04 00:23:37.082317261 +0000 UTC m=+791.070882347" observedRunningTime="2026-01-04 00:23:44.470297195 +0000 UTC m=+798.458862291" watchObservedRunningTime="2026-01-04 00:23:44.475382757 +0000 UTC m=+798.463947843" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.163672 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-sqk2p"] Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.165278 5108 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bfc83f87-93c5-4a13-9807-1f22d71c0214" containerName="extract" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.165311 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc83f87-93c5-4a13-9807-1f22d71c0214" containerName="extract" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.165339 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bfc83f87-93c5-4a13-9807-1f22d71c0214" containerName="util" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.165350 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc83f87-93c5-4a13-9807-1f22d71c0214" containerName="util" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.165361 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bfc83f87-93c5-4a13-9807-1f22d71c0214" containerName="pull" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.165369 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc83f87-93c5-4a13-9807-1f22d71c0214" containerName="pull" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.165383 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ed7bb44-e54e-4477-a030-1b100090455f" containerName="pull" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.165391 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed7bb44-e54e-4477-a030-1b100090455f" containerName="pull" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.165422 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ed7bb44-e54e-4477-a030-1b100090455f" containerName="util" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.165430 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed7bb44-e54e-4477-a030-1b100090455f" containerName="util" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.165444 5108 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="6ed7bb44-e54e-4477-a030-1b100090455f" containerName="extract" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.165454 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed7bb44-e54e-4477-a030-1b100090455f" containerName="extract" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.165613 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="6ed7bb44-e54e-4477-a030-1b100090455f" containerName="extract" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.165630 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="bfc83f87-93c5-4a13-9807-1f22d71c0214" containerName="extract" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.237257 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-sqk2p"] Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.237556 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-sqk2p" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.240534 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.241803 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.249907 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-wl6z6\"" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.295504 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7klc2\" (UniqueName: \"kubernetes.io/projected/ca471b6c-8fa7-4c07-ad6f-1b8191b591be-kube-api-access-7klc2\") pod 
\"obo-prometheus-operator-9bc85b4bf-sqk2p\" (UID: \"ca471b6c-8fa7-4c07-ad6f-1b8191b591be\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-sqk2p" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.297364 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-97jkv"] Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.307612 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-97jkv" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.318241 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-qfrnz\"" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.318545 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.323317 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-678rm"] Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.327152 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-678rm" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.341926 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-678rm"] Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.394498 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-97jkv"] Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.397214 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7klc2\" (UniqueName: \"kubernetes.io/projected/ca471b6c-8fa7-4c07-ad6f-1b8191b591be-kube-api-access-7klc2\") pod \"obo-prometheus-operator-9bc85b4bf-sqk2p\" (UID: \"ca471b6c-8fa7-4c07-ad6f-1b8191b591be\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-sqk2p" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.397358 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c93782ed-1966-449f-b093-10a0e0380729-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7687c6569-97jkv\" (UID: \"c93782ed-1966-449f-b093-10a0e0380729\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-97jkv" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.397469 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c93782ed-1966-449f-b093-10a0e0380729-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7687c6569-97jkv\" (UID: \"c93782ed-1966-449f-b093-10a0e0380729\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-97jkv" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.397611 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a6c9033-f6ec-4239-94fa-43ed16239b94-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7687c6569-678rm\" (UID: \"7a6c9033-f6ec-4239-94fa-43ed16239b94\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-678rm" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.397699 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a6c9033-f6ec-4239-94fa-43ed16239b94-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7687c6569-678rm\" (UID: \"7a6c9033-f6ec-4239-94fa-43ed16239b94\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-678rm" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.435348 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7klc2\" (UniqueName: \"kubernetes.io/projected/ca471b6c-8fa7-4c07-ad6f-1b8191b591be-kube-api-access-7klc2\") pod \"obo-prometheus-operator-9bc85b4bf-sqk2p\" (UID: \"ca471b6c-8fa7-4c07-ad6f-1b8191b591be\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-sqk2p" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.496305 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-5cwsw"] Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.501150 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c93782ed-1966-449f-b093-10a0e0380729-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7687c6569-97jkv\" (UID: \"c93782ed-1966-449f-b093-10a0e0380729\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-97jkv" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.501228 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c93782ed-1966-449f-b093-10a0e0380729-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7687c6569-97jkv\" (UID: \"c93782ed-1966-449f-b093-10a0e0380729\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-97jkv" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.501266 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a6c9033-f6ec-4239-94fa-43ed16239b94-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7687c6569-678rm\" (UID: \"7a6c9033-f6ec-4239-94fa-43ed16239b94\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-678rm" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.501287 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a6c9033-f6ec-4239-94fa-43ed16239b94-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7687c6569-678rm\" (UID: \"7a6c9033-f6ec-4239-94fa-43ed16239b94\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-678rm" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.506950 5108 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wr2d5" podUID="bfdb0a0d-28f5-46d9-ac35-c1e733798ded" containerName="registry-server" probeResult="failure" output=< Jan 04 00:23:45 crc kubenswrapper[5108]: timeout: failed to connect service ":50051" within 1s Jan 04 00:23:45 crc kubenswrapper[5108]: > Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.510087 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a6c9033-f6ec-4239-94fa-43ed16239b94-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-7687c6569-678rm\" (UID: \"7a6c9033-f6ec-4239-94fa-43ed16239b94\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-678rm" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.518397 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-5cwsw" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.528772 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c93782ed-1966-449f-b093-10a0e0380729-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7687c6569-97jkv\" (UID: \"c93782ed-1966-449f-b093-10a0e0380729\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-97jkv" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.529914 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-jr92q\"" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.530017 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.531145 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c93782ed-1966-449f-b093-10a0e0380729-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7687c6569-97jkv\" (UID: \"c93782ed-1966-449f-b093-10a0e0380729\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-97jkv" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.539424 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-5cwsw"] Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.549940 5108 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a6c9033-f6ec-4239-94fa-43ed16239b94-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7687c6569-678rm\" (UID: \"7a6c9033-f6ec-4239-94fa-43ed16239b94\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-678rm" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.559429 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-sqk2p" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.602961 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a2116a4-eb62-4e6e-99f5-22d8dfed008a-observability-operator-tls\") pod \"observability-operator-85c68dddb-5cwsw\" (UID: \"5a2116a4-eb62-4e6e-99f5-22d8dfed008a\") " pod="openshift-operators/observability-operator-85c68dddb-5cwsw" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.603008 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg2rz\" (UniqueName: \"kubernetes.io/projected/5a2116a4-eb62-4e6e-99f5-22d8dfed008a-kube-api-access-dg2rz\") pod \"observability-operator-85c68dddb-5cwsw\" (UID: \"5a2116a4-eb62-4e6e-99f5-22d8dfed008a\") " pod="openshift-operators/observability-operator-85c68dddb-5cwsw" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.637721 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-97jkv" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.656240 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-r52pf"] Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.658929 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-678rm" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.694845 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-r52pf"] Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.695106 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-r52pf" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.700694 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-bsfth\"" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.709854 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a2116a4-eb62-4e6e-99f5-22d8dfed008a-observability-operator-tls\") pod \"observability-operator-85c68dddb-5cwsw\" (UID: \"5a2116a4-eb62-4e6e-99f5-22d8dfed008a\") " pod="openshift-operators/observability-operator-85c68dddb-5cwsw" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.709898 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dg2rz\" (UniqueName: \"kubernetes.io/projected/5a2116a4-eb62-4e6e-99f5-22d8dfed008a-kube-api-access-dg2rz\") pod \"observability-operator-85c68dddb-5cwsw\" (UID: \"5a2116a4-eb62-4e6e-99f5-22d8dfed008a\") " pod="openshift-operators/observability-operator-85c68dddb-5cwsw" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.715794 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a2116a4-eb62-4e6e-99f5-22d8dfed008a-observability-operator-tls\") pod \"observability-operator-85c68dddb-5cwsw\" (UID: \"5a2116a4-eb62-4e6e-99f5-22d8dfed008a\") " pod="openshift-operators/observability-operator-85c68dddb-5cwsw" Jan 04 
00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.759950 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg2rz\" (UniqueName: \"kubernetes.io/projected/5a2116a4-eb62-4e6e-99f5-22d8dfed008a-kube-api-access-dg2rz\") pod \"observability-operator-85c68dddb-5cwsw\" (UID: \"5a2116a4-eb62-4e6e-99f5-22d8dfed008a\") " pod="openshift-operators/observability-operator-85c68dddb-5cwsw" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.822448 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtv89\" (UniqueName: \"kubernetes.io/projected/f9774351-84ab-432f-a137-73c8ccd87ead-kube-api-access-wtv89\") pod \"perses-operator-669c9f96b5-r52pf\" (UID: \"f9774351-84ab-432f-a137-73c8ccd87ead\") " pod="openshift-operators/perses-operator-669c9f96b5-r52pf" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.822517 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9774351-84ab-432f-a137-73c8ccd87ead-openshift-service-ca\") pod \"perses-operator-669c9f96b5-r52pf\" (UID: \"f9774351-84ab-432f-a137-73c8ccd87ead\") " pod="openshift-operators/perses-operator-669c9f96b5-r52pf" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.936708 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mw5wb" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.939097 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-5cwsw" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.940027 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wtv89\" (UniqueName: \"kubernetes.io/projected/f9774351-84ab-432f-a137-73c8ccd87ead-kube-api-access-wtv89\") pod \"perses-operator-669c9f96b5-r52pf\" (UID: \"f9774351-84ab-432f-a137-73c8ccd87ead\") " pod="openshift-operators/perses-operator-669c9f96b5-r52pf" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.940063 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9774351-84ab-432f-a137-73c8ccd87ead-openshift-service-ca\") pod \"perses-operator-669c9f96b5-r52pf\" (UID: \"f9774351-84ab-432f-a137-73c8ccd87ead\") " pod="openshift-operators/perses-operator-669c9f96b5-r52pf" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.941936 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9774351-84ab-432f-a137-73c8ccd87ead-openshift-service-ca\") pod \"perses-operator-669c9f96b5-r52pf\" (UID: \"f9774351-84ab-432f-a137-73c8ccd87ead\") " pod="openshift-operators/perses-operator-669c9f96b5-r52pf" Jan 04 00:23:45 crc kubenswrapper[5108]: I0104 00:23:45.977045 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtv89\" (UniqueName: \"kubernetes.io/projected/f9774351-84ab-432f-a137-73c8ccd87ead-kube-api-access-wtv89\") pod \"perses-operator-669c9f96b5-r52pf\" (UID: \"f9774351-84ab-432f-a137-73c8ccd87ead\") " pod="openshift-operators/perses-operator-669c9f96b5-r52pf" Jan 04 00:23:46 crc kubenswrapper[5108]: I0104 00:23:46.022896 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-r52pf" Jan 04 00:23:46 crc kubenswrapper[5108]: I0104 00:23:46.025987 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mw5wb" Jan 04 00:23:47 crc kubenswrapper[5108]: I0104 00:23:47.039994 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-97jkv"] Jan 04 00:23:47 crc kubenswrapper[5108]: I0104 00:23:47.078016 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-sqk2p"] Jan 04 00:23:47 crc kubenswrapper[5108]: I0104 00:23:47.116588 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-678rm"] Jan 04 00:23:47 crc kubenswrapper[5108]: I0104 00:23:47.190928 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-r52pf"] Jan 04 00:23:47 crc kubenswrapper[5108]: W0104 00:23:47.225495 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9774351_84ab_432f_a137_73c8ccd87ead.slice/crio-c65f8bb355585ae6458c289d8966cf4b8e50774d3dedfec5c0558e3884bbc545 WatchSource:0}: Error finding container c65f8bb355585ae6458c289d8966cf4b8e50774d3dedfec5c0558e3884bbc545: Status 404 returned error can't find the container with id c65f8bb355585ae6458c289d8966cf4b8e50774d3dedfec5c0558e3884bbc545 Jan 04 00:23:47 crc kubenswrapper[5108]: I0104 00:23:47.242641 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mw5wb"] Jan 04 00:23:47 crc kubenswrapper[5108]: I0104 00:23:47.263003 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-5cwsw"] Jan 04 00:23:47 crc kubenswrapper[5108]: W0104 00:23:47.288636 
5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a2116a4_eb62_4e6e_99f5_22d8dfed008a.slice/crio-ce9f976e5a8ef631433600df9d542c704760246bf75cf9573722d6ba81a75f90 WatchSource:0}: Error finding container ce9f976e5a8ef631433600df9d542c704760246bf75cf9573722d6ba81a75f90: Status 404 returned error can't find the container with id ce9f976e5a8ef631433600df9d542c704760246bf75cf9573722d6ba81a75f90 Jan 04 00:23:47 crc kubenswrapper[5108]: I0104 00:23:47.510922 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-678rm" event={"ID":"7a6c9033-f6ec-4239-94fa-43ed16239b94","Type":"ContainerStarted","Data":"216f0f15343a9e9de70bcada9b3c0176a30fa8f6246af8d98a4dbc2bca4abfed"} Jan 04 00:23:47 crc kubenswrapper[5108]: I0104 00:23:47.522809 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-sqk2p" event={"ID":"ca471b6c-8fa7-4c07-ad6f-1b8191b591be","Type":"ContainerStarted","Data":"305bf36574319572350f42b9a60d2aafd63ac2de90f4d62b167d2895128b517f"} Jan 04 00:23:47 crc kubenswrapper[5108]: I0104 00:23:47.525083 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-97jkv" event={"ID":"c93782ed-1966-449f-b093-10a0e0380729","Type":"ContainerStarted","Data":"d50729f35953fde71c48e5ce37c7a8654cb159f07957143cce0128dcd9efc530"} Jan 04 00:23:47 crc kubenswrapper[5108]: I0104 00:23:47.533370 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-r52pf" event={"ID":"f9774351-84ab-432f-a137-73c8ccd87ead","Type":"ContainerStarted","Data":"c65f8bb355585ae6458c289d8966cf4b8e50774d3dedfec5c0558e3884bbc545"} Jan 04 00:23:47 crc kubenswrapper[5108]: I0104 00:23:47.538831 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-operator-85c68dddb-5cwsw" event={"ID":"5a2116a4-eb62-4e6e-99f5-22d8dfed008a","Type":"ContainerStarted","Data":"ce9f976e5a8ef631433600df9d542c704760246bf75cf9573722d6ba81a75f90"} Jan 04 00:23:47 crc kubenswrapper[5108]: I0104 00:23:47.539802 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mw5wb" podUID="c08c9c29-bbd0-47ae-8449-0a08a5a97f86" containerName="registry-server" containerID="cri-o://ab123c36bbce949a15a3f00c44078bff8d811792207be0281ca0799822013940" gracePeriod=2 Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.135130 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mw5wb" Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.186041 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qv78\" (UniqueName: \"kubernetes.io/projected/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-kube-api-access-5qv78\") pod \"c08c9c29-bbd0-47ae-8449-0a08a5a97f86\" (UID: \"c08c9c29-bbd0-47ae-8449-0a08a5a97f86\") " Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.186279 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-catalog-content\") pod \"c08c9c29-bbd0-47ae-8449-0a08a5a97f86\" (UID: \"c08c9c29-bbd0-47ae-8449-0a08a5a97f86\") " Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.186442 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-utilities\") pod \"c08c9c29-bbd0-47ae-8449-0a08a5a97f86\" (UID: \"c08c9c29-bbd0-47ae-8449-0a08a5a97f86\") " Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.187855 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-utilities" (OuterVolumeSpecName: "utilities") pod "c08c9c29-bbd0-47ae-8449-0a08a5a97f86" (UID: "c08c9c29-bbd0-47ae-8449-0a08a5a97f86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.200410 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-kube-api-access-5qv78" (OuterVolumeSpecName: "kube-api-access-5qv78") pod "c08c9c29-bbd0-47ae-8449-0a08a5a97f86" (UID: "c08c9c29-bbd0-47ae-8449-0a08a5a97f86"). InnerVolumeSpecName "kube-api-access-5qv78". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.288588 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-utilities\") on node \"crc\" DevicePath \"\"" Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.288642 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5qv78\" (UniqueName: \"kubernetes.io/projected/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-kube-api-access-5qv78\") on node \"crc\" DevicePath \"\"" Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.308464 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c08c9c29-bbd0-47ae-8449-0a08a5a97f86" (UID: "c08c9c29-bbd0-47ae-8449-0a08a5a97f86"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.390175 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c08c9c29-bbd0-47ae-8449-0a08a5a97f86-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.553261 5108 generic.go:358] "Generic (PLEG): container finished" podID="c08c9c29-bbd0-47ae-8449-0a08a5a97f86" containerID="ab123c36bbce949a15a3f00c44078bff8d811792207be0281ca0799822013940" exitCode=0 Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.553432 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw5wb" event={"ID":"c08c9c29-bbd0-47ae-8449-0a08a5a97f86","Type":"ContainerDied","Data":"ab123c36bbce949a15a3f00c44078bff8d811792207be0281ca0799822013940"} Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.553471 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mw5wb" event={"ID":"c08c9c29-bbd0-47ae-8449-0a08a5a97f86","Type":"ContainerDied","Data":"730cbbdcc1bc630671b9910c1fb39a6f4d488b2bcccd1e03adfcf7fd20d5ebae"} Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.553494 5108 scope.go:117] "RemoveContainer" containerID="ab123c36bbce949a15a3f00c44078bff8d811792207be0281ca0799822013940" Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.553729 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mw5wb" Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.617519 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mw5wb"] Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.623318 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mw5wb"] Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.711496 5108 scope.go:117] "RemoveContainer" containerID="48ffa02dab4807df5e7f7b89ad1b25f6ac59edf4b5f6fb7401a831e67faeb7d0" Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.865415 5108 scope.go:117] "RemoveContainer" containerID="32bd764eff9734f54132dd0263fca9e406070d1a2f097359de70d0b139073050" Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.976939 5108 scope.go:117] "RemoveContainer" containerID="ab123c36bbce949a15a3f00c44078bff8d811792207be0281ca0799822013940" Jan 04 00:23:48 crc kubenswrapper[5108]: E0104 00:23:48.979122 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab123c36bbce949a15a3f00c44078bff8d811792207be0281ca0799822013940\": container with ID starting with ab123c36bbce949a15a3f00c44078bff8d811792207be0281ca0799822013940 not found: ID does not exist" containerID="ab123c36bbce949a15a3f00c44078bff8d811792207be0281ca0799822013940" Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.979213 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab123c36bbce949a15a3f00c44078bff8d811792207be0281ca0799822013940"} err="failed to get container status \"ab123c36bbce949a15a3f00c44078bff8d811792207be0281ca0799822013940\": rpc error: code = NotFound desc = could not find container \"ab123c36bbce949a15a3f00c44078bff8d811792207be0281ca0799822013940\": container with ID starting with ab123c36bbce949a15a3f00c44078bff8d811792207be0281ca0799822013940 not found: ID does 
not exist" Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.979261 5108 scope.go:117] "RemoveContainer" containerID="48ffa02dab4807df5e7f7b89ad1b25f6ac59edf4b5f6fb7401a831e67faeb7d0" Jan 04 00:23:48 crc kubenswrapper[5108]: E0104 00:23:48.988429 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48ffa02dab4807df5e7f7b89ad1b25f6ac59edf4b5f6fb7401a831e67faeb7d0\": container with ID starting with 48ffa02dab4807df5e7f7b89ad1b25f6ac59edf4b5f6fb7401a831e67faeb7d0 not found: ID does not exist" containerID="48ffa02dab4807df5e7f7b89ad1b25f6ac59edf4b5f6fb7401a831e67faeb7d0" Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.988529 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48ffa02dab4807df5e7f7b89ad1b25f6ac59edf4b5f6fb7401a831e67faeb7d0"} err="failed to get container status \"48ffa02dab4807df5e7f7b89ad1b25f6ac59edf4b5f6fb7401a831e67faeb7d0\": rpc error: code = NotFound desc = could not find container \"48ffa02dab4807df5e7f7b89ad1b25f6ac59edf4b5f6fb7401a831e67faeb7d0\": container with ID starting with 48ffa02dab4807df5e7f7b89ad1b25f6ac59edf4b5f6fb7401a831e67faeb7d0 not found: ID does not exist" Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.988560 5108 scope.go:117] "RemoveContainer" containerID="32bd764eff9734f54132dd0263fca9e406070d1a2f097359de70d0b139073050" Jan 04 00:23:48 crc kubenswrapper[5108]: E0104 00:23:48.992946 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32bd764eff9734f54132dd0263fca9e406070d1a2f097359de70d0b139073050\": container with ID starting with 32bd764eff9734f54132dd0263fca9e406070d1a2f097359de70d0b139073050 not found: ID does not exist" containerID="32bd764eff9734f54132dd0263fca9e406070d1a2f097359de70d0b139073050" Jan 04 00:23:48 crc kubenswrapper[5108]: I0104 00:23:48.992986 5108 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32bd764eff9734f54132dd0263fca9e406070d1a2f097359de70d0b139073050"} err="failed to get container status \"32bd764eff9734f54132dd0263fca9e406070d1a2f097359de70d0b139073050\": rpc error: code = NotFound desc = could not find container \"32bd764eff9734f54132dd0263fca9e406070d1a2f097359de70d0b139073050\": container with ID starting with 32bd764eff9734f54132dd0263fca9e406070d1a2f097359de70d0b139073050 not found: ID does not exist" Jan 04 00:23:49 crc kubenswrapper[5108]: I0104 00:23:49.458389 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-4srpf"] Jan 04 00:23:49 crc kubenswrapper[5108]: I0104 00:23:49.459236 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c08c9c29-bbd0-47ae-8449-0a08a5a97f86" containerName="extract-utilities" Jan 04 00:23:49 crc kubenswrapper[5108]: I0104 00:23:49.459255 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08c9c29-bbd0-47ae-8449-0a08a5a97f86" containerName="extract-utilities" Jan 04 00:23:49 crc kubenswrapper[5108]: I0104 00:23:49.459303 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c08c9c29-bbd0-47ae-8449-0a08a5a97f86" containerName="registry-server" Jan 04 00:23:49 crc kubenswrapper[5108]: I0104 00:23:49.459310 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08c9c29-bbd0-47ae-8449-0a08a5a97f86" containerName="registry-server" Jan 04 00:23:49 crc kubenswrapper[5108]: I0104 00:23:49.459320 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c08c9c29-bbd0-47ae-8449-0a08a5a97f86" containerName="extract-content" Jan 04 00:23:49 crc kubenswrapper[5108]: I0104 00:23:49.459328 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08c9c29-bbd0-47ae-8449-0a08a5a97f86" containerName="extract-content" Jan 04 00:23:49 crc kubenswrapper[5108]: I0104 00:23:49.459452 5108 
memory_manager.go:356] "RemoveStaleState removing state" podUID="c08c9c29-bbd0-47ae-8449-0a08a5a97f86" containerName="registry-server"
Jan 04 00:23:49 crc kubenswrapper[5108]: I0104 00:23:49.720096 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-4srpf"]
Jan 04 00:23:49 crc kubenswrapper[5108]: I0104 00:23:49.720259 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-4srpf"
Jan 04 00:23:49 crc kubenswrapper[5108]: I0104 00:23:49.733802 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\""
Jan 04 00:23:49 crc kubenswrapper[5108]: I0104 00:23:49.733993 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-88k6p\""
Jan 04 00:23:49 crc kubenswrapper[5108]: I0104 00:23:49.734121 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\""
Jan 04 00:23:49 crc kubenswrapper[5108]: I0104 00:23:49.824663 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bws5\" (UniqueName: \"kubernetes.io/projected/32c613a6-4c2a-4804-8c97-0746939b3441-kube-api-access-8bws5\") pod \"interconnect-operator-78b9bd8798-4srpf\" (UID: \"32c613a6-4c2a-4804-8c97-0746939b3441\") " pod="service-telemetry/interconnect-operator-78b9bd8798-4srpf"
Jan 04 00:23:49 crc kubenswrapper[5108]: I0104 00:23:49.926153 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8bws5\" (UniqueName: \"kubernetes.io/projected/32c613a6-4c2a-4804-8c97-0746939b3441-kube-api-access-8bws5\") pod \"interconnect-operator-78b9bd8798-4srpf\" (UID: \"32c613a6-4c2a-4804-8c97-0746939b3441\") " pod="service-telemetry/interconnect-operator-78b9bd8798-4srpf"
Jan 04 00:23:50 crc kubenswrapper[5108]: I0104 00:23:50.018521 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bws5\" (UniqueName: \"kubernetes.io/projected/32c613a6-4c2a-4804-8c97-0746939b3441-kube-api-access-8bws5\") pod \"interconnect-operator-78b9bd8798-4srpf\" (UID: \"32c613a6-4c2a-4804-8c97-0746939b3441\") " pod="service-telemetry/interconnect-operator-78b9bd8798-4srpf"
Jan 04 00:23:50 crc kubenswrapper[5108]: I0104 00:23:50.050669 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-4srpf"
Jan 04 00:23:50 crc kubenswrapper[5108]: I0104 00:23:50.469276 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c08c9c29-bbd0-47ae-8449-0a08a5a97f86" path="/var/lib/kubelet/pods/c08c9c29-bbd0-47ae-8449-0a08a5a97f86/volumes"
Jan 04 00:23:50 crc kubenswrapper[5108]: I0104 00:23:50.697715 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-4srpf"]
Jan 04 00:23:51 crc kubenswrapper[5108]: I0104 00:23:51.604789 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-4srpf" event={"ID":"32c613a6-4c2a-4804-8c97-0746939b3441","Type":"ContainerStarted","Data":"512bd63f985f0a5ea90a2aa35cc3f15a9ab0172c7c75111ad7c52b261e6a7e3d"}
Jan 04 00:23:52 crc kubenswrapper[5108]: I0104 00:23:52.111565 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-7977f944b6-cmftp"]
Jan 04 00:23:52 crc kubenswrapper[5108]: I0104 00:23:52.125361 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-7977f944b6-cmftp"
Jan 04 00:23:52 crc kubenswrapper[5108]: I0104 00:23:52.127992 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\""
Jan 04 00:23:52 crc kubenswrapper[5108]: I0104 00:23:52.128039 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-5xlq2\""
Jan 04 00:23:52 crc kubenswrapper[5108]: I0104 00:23:52.136048 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7977f944b6-cmftp"]
Jan 04 00:23:52 crc kubenswrapper[5108]: I0104 00:23:52.177447 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqs9h\" (UniqueName: \"kubernetes.io/projected/ae32f373-1730-41c4-9061-ff7573625e17-kube-api-access-hqs9h\") pod \"elastic-operator-7977f944b6-cmftp\" (UID: \"ae32f373-1730-41c4-9061-ff7573625e17\") " pod="service-telemetry/elastic-operator-7977f944b6-cmftp"
Jan 04 00:23:52 crc kubenswrapper[5108]: I0104 00:23:52.177530 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae32f373-1730-41c4-9061-ff7573625e17-apiservice-cert\") pod \"elastic-operator-7977f944b6-cmftp\" (UID: \"ae32f373-1730-41c4-9061-ff7573625e17\") " pod="service-telemetry/elastic-operator-7977f944b6-cmftp"
Jan 04 00:23:52 crc kubenswrapper[5108]: I0104 00:23:52.178113 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae32f373-1730-41c4-9061-ff7573625e17-webhook-cert\") pod \"elastic-operator-7977f944b6-cmftp\" (UID: \"ae32f373-1730-41c4-9061-ff7573625e17\") " pod="service-telemetry/elastic-operator-7977f944b6-cmftp"
Jan 04 00:23:52 crc kubenswrapper[5108]: I0104 00:23:52.280003 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hqs9h\" (UniqueName: \"kubernetes.io/projected/ae32f373-1730-41c4-9061-ff7573625e17-kube-api-access-hqs9h\") pod \"elastic-operator-7977f944b6-cmftp\" (UID: \"ae32f373-1730-41c4-9061-ff7573625e17\") " pod="service-telemetry/elastic-operator-7977f944b6-cmftp"
Jan 04 00:23:52 crc kubenswrapper[5108]: I0104 00:23:52.280082 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae32f373-1730-41c4-9061-ff7573625e17-apiservice-cert\") pod \"elastic-operator-7977f944b6-cmftp\" (UID: \"ae32f373-1730-41c4-9061-ff7573625e17\") " pod="service-telemetry/elastic-operator-7977f944b6-cmftp"
Jan 04 00:23:52 crc kubenswrapper[5108]: I0104 00:23:52.280137 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae32f373-1730-41c4-9061-ff7573625e17-webhook-cert\") pod \"elastic-operator-7977f944b6-cmftp\" (UID: \"ae32f373-1730-41c4-9061-ff7573625e17\") " pod="service-telemetry/elastic-operator-7977f944b6-cmftp"
Jan 04 00:23:52 crc kubenswrapper[5108]: I0104 00:23:52.306517 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae32f373-1730-41c4-9061-ff7573625e17-apiservice-cert\") pod \"elastic-operator-7977f944b6-cmftp\" (UID: \"ae32f373-1730-41c4-9061-ff7573625e17\") " pod="service-telemetry/elastic-operator-7977f944b6-cmftp"
Jan 04 00:23:52 crc kubenswrapper[5108]: I0104 00:23:52.306540 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae32f373-1730-41c4-9061-ff7573625e17-webhook-cert\") pod \"elastic-operator-7977f944b6-cmftp\" (UID: \"ae32f373-1730-41c4-9061-ff7573625e17\") " pod="service-telemetry/elastic-operator-7977f944b6-cmftp"
Jan 04 00:23:52 crc kubenswrapper[5108]: I0104 00:23:52.314102 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqs9h\" (UniqueName: \"kubernetes.io/projected/ae32f373-1730-41c4-9061-ff7573625e17-kube-api-access-hqs9h\") pod \"elastic-operator-7977f944b6-cmftp\" (UID: \"ae32f373-1730-41c4-9061-ff7573625e17\") " pod="service-telemetry/elastic-operator-7977f944b6-cmftp"
Jan 04 00:23:52 crc kubenswrapper[5108]: I0104 00:23:52.460339 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-7977f944b6-cmftp"
Jan 04 00:23:54 crc kubenswrapper[5108]: I0104 00:23:54.511584 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wr2d5"
Jan 04 00:23:54 crc kubenswrapper[5108]: I0104 00:23:54.654545 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wr2d5"
Jan 04 00:23:57 crc kubenswrapper[5108]: I0104 00:23:57.629081 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wr2d5"]
Jan 04 00:23:57 crc kubenswrapper[5108]: I0104 00:23:57.629980 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wr2d5" podUID="bfdb0a0d-28f5-46d9-ac35-c1e733798ded" containerName="registry-server" containerID="cri-o://c3144d5c2eeccb2ac481bd0c2a7de14ee741208cda01ca38c0d854431b449dda" gracePeriod=2
Jan 04 00:23:58 crc kubenswrapper[5108]: I0104 00:23:58.769889 5108 generic.go:358] "Generic (PLEG): container finished" podID="bfdb0a0d-28f5-46d9-ac35-c1e733798ded" containerID="c3144d5c2eeccb2ac481bd0c2a7de14ee741208cda01ca38c0d854431b449dda" exitCode=0
Jan 04 00:23:58 crc kubenswrapper[5108]: I0104 00:23:58.769984 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wr2d5" event={"ID":"bfdb0a0d-28f5-46d9-ac35-c1e733798ded","Type":"ContainerDied","Data":"c3144d5c2eeccb2ac481bd0c2a7de14ee741208cda01ca38c0d854431b449dda"}
Jan 04 00:24:00 crc kubenswrapper[5108]: I0104 00:24:00.139954 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29458104-rknh4"]
Jan 04 00:24:00 crc kubenswrapper[5108]: I0104 00:24:00.145409 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458104-rknh4"
Jan 04 00:24:00 crc kubenswrapper[5108]: I0104 00:24:00.153177 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 04 00:24:00 crc kubenswrapper[5108]: I0104 00:24:00.153583 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 04 00:24:00 crc kubenswrapper[5108]: I0104 00:24:00.153701 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-s7k94\""
Jan 04 00:24:00 crc kubenswrapper[5108]: I0104 00:24:00.158482 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458104-rknh4"]
Jan 04 00:24:00 crc kubenswrapper[5108]: I0104 00:24:00.239969 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb2vx\" (UniqueName: \"kubernetes.io/projected/2a194bab-cf96-4d6b-b9f6-60bdc5c57621-kube-api-access-zb2vx\") pod \"auto-csr-approver-29458104-rknh4\" (UID: \"2a194bab-cf96-4d6b-b9f6-60bdc5c57621\") " pod="openshift-infra/auto-csr-approver-29458104-rknh4"
Jan 04 00:24:00 crc kubenswrapper[5108]: I0104 00:24:00.341350 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zb2vx\" (UniqueName: \"kubernetes.io/projected/2a194bab-cf96-4d6b-b9f6-60bdc5c57621-kube-api-access-zb2vx\") pod \"auto-csr-approver-29458104-rknh4\" (UID: \"2a194bab-cf96-4d6b-b9f6-60bdc5c57621\") " pod="openshift-infra/auto-csr-approver-29458104-rknh4"
Jan 04 00:24:00 crc kubenswrapper[5108]: I0104 00:24:00.365817 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb2vx\" (UniqueName: \"kubernetes.io/projected/2a194bab-cf96-4d6b-b9f6-60bdc5c57621-kube-api-access-zb2vx\") pod \"auto-csr-approver-29458104-rknh4\" (UID: \"2a194bab-cf96-4d6b-b9f6-60bdc5c57621\") " pod="openshift-infra/auto-csr-approver-29458104-rknh4"
Jan 04 00:24:00 crc kubenswrapper[5108]: I0104 00:24:00.469688 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458104-rknh4"
Jan 04 00:24:04 crc kubenswrapper[5108]: E0104 00:24:04.516482 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3144d5c2eeccb2ac481bd0c2a7de14ee741208cda01ca38c0d854431b449dda is running failed: container process not found" containerID="c3144d5c2eeccb2ac481bd0c2a7de14ee741208cda01ca38c0d854431b449dda" cmd=["grpc_health_probe","-addr=:50051"]
Jan 04 00:24:04 crc kubenswrapper[5108]: E0104 00:24:04.519922 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3144d5c2eeccb2ac481bd0c2a7de14ee741208cda01ca38c0d854431b449dda is running failed: container process not found" containerID="c3144d5c2eeccb2ac481bd0c2a7de14ee741208cda01ca38c0d854431b449dda" cmd=["grpc_health_probe","-addr=:50051"]
Jan 04 00:24:04 crc kubenswrapper[5108]: E0104 00:24:04.521024 5108 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3144d5c2eeccb2ac481bd0c2a7de14ee741208cda01ca38c0d854431b449dda is running failed: container process not found" containerID="c3144d5c2eeccb2ac481bd0c2a7de14ee741208cda01ca38c0d854431b449dda" cmd=["grpc_health_probe","-addr=:50051"]
Jan 04 00:24:04 crc kubenswrapper[5108]: E0104 00:24:04.521161 5108 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3144d5c2eeccb2ac481bd0c2a7de14ee741208cda01ca38c0d854431b449dda is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-wr2d5" podUID="bfdb0a0d-28f5-46d9-ac35-c1e733798ded" containerName="registry-server" probeResult="unknown"
Jan 04 00:24:10 crc kubenswrapper[5108]: I0104 00:24:10.808686 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wr2d5"
Jan 04 00:24:10 crc kubenswrapper[5108]: I0104 00:24:10.967657 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-catalog-content\") pod \"bfdb0a0d-28f5-46d9-ac35-c1e733798ded\" (UID: \"bfdb0a0d-28f5-46d9-ac35-c1e733798ded\") "
Jan 04 00:24:10 crc kubenswrapper[5108]: I0104 00:24:10.968413 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-utilities\") pod \"bfdb0a0d-28f5-46d9-ac35-c1e733798ded\" (UID: \"bfdb0a0d-28f5-46d9-ac35-c1e733798ded\") "
Jan 04 00:24:10 crc kubenswrapper[5108]: I0104 00:24:10.969529 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-utilities" (OuterVolumeSpecName: "utilities") pod "bfdb0a0d-28f5-46d9-ac35-c1e733798ded" (UID: "bfdb0a0d-28f5-46d9-ac35-c1e733798ded"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:24:10 crc kubenswrapper[5108]: I0104 00:24:10.969691 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69zn2\" (UniqueName: \"kubernetes.io/projected/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-kube-api-access-69zn2\") pod \"bfdb0a0d-28f5-46d9-ac35-c1e733798ded\" (UID: \"bfdb0a0d-28f5-46d9-ac35-c1e733798ded\") "
Jan 04 00:24:10 crc kubenswrapper[5108]: I0104 00:24:10.971413 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-utilities\") on node \"crc\" DevicePath \"\""
Jan 04 00:24:10 crc kubenswrapper[5108]: I0104 00:24:10.983114 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-kube-api-access-69zn2" (OuterVolumeSpecName: "kube-api-access-69zn2") pod "bfdb0a0d-28f5-46d9-ac35-c1e733798ded" (UID: "bfdb0a0d-28f5-46d9-ac35-c1e733798ded"). InnerVolumeSpecName "kube-api-access-69zn2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:24:11 crc kubenswrapper[5108]: I0104 00:24:11.023062 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfdb0a0d-28f5-46d9-ac35-c1e733798ded" (UID: "bfdb0a0d-28f5-46d9-ac35-c1e733798ded"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:24:11 crc kubenswrapper[5108]: I0104 00:24:11.073216 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-69zn2\" (UniqueName: \"kubernetes.io/projected/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-kube-api-access-69zn2\") on node \"crc\" DevicePath \"\""
Jan 04 00:24:11 crc kubenswrapper[5108]: I0104 00:24:11.073264 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfdb0a0d-28f5-46d9-ac35-c1e733798ded-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 04 00:24:11 crc kubenswrapper[5108]: I0104 00:24:11.098113 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wr2d5" event={"ID":"bfdb0a0d-28f5-46d9-ac35-c1e733798ded","Type":"ContainerDied","Data":"6a2151008fdc073b9801e610656ad5d4780a50d675dbe78660703df67fbdbbcd"}
Jan 04 00:24:11 crc kubenswrapper[5108]: I0104 00:24:11.098243 5108 scope.go:117] "RemoveContainer" containerID="c3144d5c2eeccb2ac481bd0c2a7de14ee741208cda01ca38c0d854431b449dda"
Jan 04 00:24:11 crc kubenswrapper[5108]: I0104 00:24:11.098489 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wr2d5"
Jan 04 00:24:11 crc kubenswrapper[5108]: I0104 00:24:11.133278 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wr2d5"]
Jan 04 00:24:11 crc kubenswrapper[5108]: I0104 00:24:11.139387 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wr2d5"]
Jan 04 00:24:12 crc kubenswrapper[5108]: I0104 00:24:12.464601 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfdb0a0d-28f5-46d9-ac35-c1e733798ded" path="/var/lib/kubelet/pods/bfdb0a0d-28f5-46d9-ac35-c1e733798ded/volumes"
Jan 04 00:24:16 crc kubenswrapper[5108]: I0104 00:24:16.503775 5108 scope.go:117] "RemoveContainer" containerID="f5324c558468e5a31669e9d2c91293f23844e3b7c51ef399d00ad41a22ce4019"
Jan 04 00:24:16 crc kubenswrapper[5108]: I0104 00:24:16.573956 5108 scope.go:117] "RemoveContainer" containerID="e13a414fdb4776f51d21dc69fbb37293da9fb71a6d6585d262b3f0c157432740"
Jan 04 00:24:16 crc kubenswrapper[5108]: I0104 00:24:16.790231 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7977f944b6-cmftp"]
Jan 04 00:24:16 crc kubenswrapper[5108]: W0104 00:24:16.851854 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae32f373_1730_41c4_9061_ff7573625e17.slice/crio-b0f575824acf2f6e06a770f9b463afe296fe6683025f62d6294aca2b4420a037 WatchSource:0}: Error finding container b0f575824acf2f6e06a770f9b463afe296fe6683025f62d6294aca2b4420a037: Status 404 returned error can't find the container with id b0f575824acf2f6e06a770f9b463afe296fe6683025f62d6294aca2b4420a037
Jan 04 00:24:16 crc kubenswrapper[5108]: I0104 00:24:16.977739 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458104-rknh4"]
Jan 04 00:24:17 crc kubenswrapper[5108]: I0104 00:24:17.167857 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7977f944b6-cmftp" event={"ID":"ae32f373-1730-41c4-9061-ff7573625e17","Type":"ContainerStarted","Data":"b0f575824acf2f6e06a770f9b463afe296fe6683025f62d6294aca2b4420a037"}
Jan 04 00:24:17 crc kubenswrapper[5108]: I0104 00:24:17.170803 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-678rm" event={"ID":"7a6c9033-f6ec-4239-94fa-43ed16239b94","Type":"ContainerStarted","Data":"e170c55818a393aedb93afa1131065caf2489ecb9dea8a1f581c688dbbc2190a"}
Jan 04 00:24:17 crc kubenswrapper[5108]: I0104 00:24:17.173774 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458104-rknh4" event={"ID":"2a194bab-cf96-4d6b-b9f6-60bdc5c57621","Type":"ContainerStarted","Data":"c30ac569f3f61337ba27707fbb547585dae537492891523e4dfe6ad4ac949385"}
Jan 04 00:24:17 crc kubenswrapper[5108]: I0104 00:24:17.176544 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" event={"ID":"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72","Type":"ContainerStarted","Data":"cce3129a6b10e3206d1bfa397d217c541479996920fce3acbffdc859e578343c"}
Jan 04 00:24:17 crc kubenswrapper[5108]: I0104 00:24:17.199041 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-678rm" podStartSLOduration=2.860112979 podStartE2EDuration="32.199006671s" podCreationTimestamp="2026-01-04 00:23:45 +0000 UTC" firstStartedPulling="2026-01-04 00:23:47.165340454 +0000 UTC m=+801.153905540" lastFinishedPulling="2026-01-04 00:24:16.504234146 +0000 UTC m=+830.492799232" observedRunningTime="2026-01-04 00:24:17.196677067 +0000 UTC m=+831.185242153" watchObservedRunningTime="2026-01-04 00:24:17.199006671 +0000 UTC m=+831.187571767"
Jan 04 00:24:18 crc kubenswrapper[5108]: I0104 00:24:18.201690 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-sqk2p" event={"ID":"ca471b6c-8fa7-4c07-ad6f-1b8191b591be","Type":"ContainerStarted","Data":"1e49527d769ee11196af3e5b788cecddce39cb88bb2c68cc8517c0f1b9d4c430"}
Jan 04 00:24:18 crc kubenswrapper[5108]: I0104 00:24:18.204619 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-97jkv" event={"ID":"c93782ed-1966-449f-b093-10a0e0380729","Type":"ContainerStarted","Data":"a9a836f097aece544b447cf2ec7abf8b38cf3d513204fa999ce05cf640d3199a"}
Jan 04 00:24:18 crc kubenswrapper[5108]: I0104 00:24:18.207968 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-4srpf" event={"ID":"32c613a6-4c2a-4804-8c97-0746939b3441","Type":"ContainerStarted","Data":"043b05a8323eb97ff85719879251737d06f8d2e8faa57d1d919badfbfa86ef5c"}
Jan 04 00:24:18 crc kubenswrapper[5108]: I0104 00:24:18.209790 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-r52pf" event={"ID":"f9774351-84ab-432f-a137-73c8ccd87ead","Type":"ContainerStarted","Data":"6e12445e6905078c9ec840f4a635b1cde90861974b3d27260055681e46355380"}
Jan 04 00:24:18 crc kubenswrapper[5108]: I0104 00:24:18.210347 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-r52pf"
Jan 04 00:24:18 crc kubenswrapper[5108]: I0104 00:24:18.214643 5108 generic.go:358] "Generic (PLEG): container finished" podID="e2e9b244-16b4-4e6b-a6cf-e82f0d019f72" containerID="cce3129a6b10e3206d1bfa397d217c541479996920fce3acbffdc859e578343c" exitCode=0
Jan 04 00:24:18 crc kubenswrapper[5108]: I0104 00:24:18.214757 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" event={"ID":"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72","Type":"ContainerDied","Data":"cce3129a6b10e3206d1bfa397d217c541479996920fce3acbffdc859e578343c"}
Jan 04 00:24:18 crc kubenswrapper[5108]: I0104 00:24:18.232613 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-5cwsw" event={"ID":"5a2116a4-eb62-4e6e-99f5-22d8dfed008a","Type":"ContainerStarted","Data":"03299e6d65e6492a6206803f3913f64433eba72309b027bc75a25b47fb8d1d0c"}
Jan 04 00:24:18 crc kubenswrapper[5108]: I0104 00:24:18.244253 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-sqk2p" podStartSLOduration=3.898751416 podStartE2EDuration="33.244225202s" podCreationTimestamp="2026-01-04 00:23:45 +0000 UTC" firstStartedPulling="2026-01-04 00:23:47.1587582 +0000 UTC m=+801.147323286" lastFinishedPulling="2026-01-04 00:24:16.504231986 +0000 UTC m=+830.492797072" observedRunningTime="2026-01-04 00:24:18.240741525 +0000 UTC m=+832.229306611" watchObservedRunningTime="2026-01-04 00:24:18.244225202 +0000 UTC m=+832.232790298"
Jan 04 00:24:18 crc kubenswrapper[5108]: I0104 00:24:18.280711 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-4srpf" podStartSLOduration=3.194449221 podStartE2EDuration="29.280670476s" podCreationTimestamp="2026-01-04 00:23:49 +0000 UTC" firstStartedPulling="2026-01-04 00:23:50.742822565 +0000 UTC m=+804.731387651" lastFinishedPulling="2026-01-04 00:24:16.82904382 +0000 UTC m=+830.817608906" observedRunningTime="2026-01-04 00:24:18.27901378 +0000 UTC m=+832.267578876" watchObservedRunningTime="2026-01-04 00:24:18.280670476 +0000 UTC m=+832.269235582"
Jan 04 00:24:18 crc kubenswrapper[5108]: I0104 00:24:18.302823 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-r52pf" podStartSLOduration=4.041855527 podStartE2EDuration="33.302801312s" podCreationTimestamp="2026-01-04 00:23:45 +0000 UTC" firstStartedPulling="2026-01-04 00:23:47.24431328 +0000 UTC m=+801.232878366" lastFinishedPulling="2026-01-04 00:24:16.505259065 +0000 UTC m=+830.493824151" observedRunningTime="2026-01-04 00:24:18.301758182 +0000 UTC m=+832.290323288" watchObservedRunningTime="2026-01-04 00:24:18.302801312 +0000 UTC m=+832.291366408"
Jan 04 00:24:18 crc kubenswrapper[5108]: I0104 00:24:18.328915 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7687c6569-97jkv" podStartSLOduration=4.66041364 podStartE2EDuration="33.328893257s" podCreationTimestamp="2026-01-04 00:23:45 +0000 UTC" firstStartedPulling="2026-01-04 00:23:47.158543494 +0000 UTC m=+801.147108570" lastFinishedPulling="2026-01-04 00:24:15.827023101 +0000 UTC m=+829.815588187" observedRunningTime="2026-01-04 00:24:18.322744616 +0000 UTC m=+832.311309712" watchObservedRunningTime="2026-01-04 00:24:18.328893257 +0000 UTC m=+832.317458343"
Jan 04 00:24:20 crc kubenswrapper[5108]: I0104 00:24:20.286764 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-5cwsw"
Jan 04 00:24:20 crc kubenswrapper[5108]: I0104 00:24:20.296116 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-5cwsw"
Jan 04 00:24:20 crc kubenswrapper[5108]: I0104 00:24:20.312538 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-5cwsw" podStartSLOduration=6.098209211 podStartE2EDuration="35.312513058s" podCreationTimestamp="2026-01-04 00:23:45 +0000 UTC" firstStartedPulling="2026-01-04 00:23:47.289926749 +0000 UTC m=+801.278491835" lastFinishedPulling="2026-01-04 00:24:16.504230596 +0000 UTC m=+830.492795682" observedRunningTime="2026-01-04 00:24:20.312239511 +0000 UTC m=+834.300804607" watchObservedRunningTime="2026-01-04 00:24:20.312513058 +0000 UTC m=+834.301078144"
Jan 04 00:24:21 crc kubenswrapper[5108]: I0104 00:24:21.258100 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" event={"ID":"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72","Type":"ContainerStarted","Data":"6843f14abb5f0624d1e16e5cb5b25caf858c19879642f359d805b5e90e6e0cb5"}
Jan 04 00:24:21 crc kubenswrapper[5108]: I0104 00:24:21.282694 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" podStartSLOduration=10.863312634 podStartE2EDuration="42.282666241s" podCreationTimestamp="2026-01-04 00:23:39 +0000 UTC" firstStartedPulling="2026-01-04 00:23:44.40573719 +0000 UTC m=+798.394302276" lastFinishedPulling="2026-01-04 00:24:15.825090797 +0000 UTC m=+829.813655883" observedRunningTime="2026-01-04 00:24:21.279885624 +0000 UTC m=+835.268450710" watchObservedRunningTime="2026-01-04 00:24:21.282666241 +0000 UTC m=+835.271231337"
Jan 04 00:24:22 crc kubenswrapper[5108]: I0104 00:24:22.266395 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458104-rknh4" event={"ID":"2a194bab-cf96-4d6b-b9f6-60bdc5c57621","Type":"ContainerStarted","Data":"91fab3741e0e41eb9ff0379c59b7dbdf4fbd5f18e24da388b85f838afe832e92"}
Jan 04 00:24:22 crc kubenswrapper[5108]: I0104 00:24:22.271252 5108 generic.go:358] "Generic (PLEG): container finished" podID="e2e9b244-16b4-4e6b-a6cf-e82f0d019f72" containerID="6843f14abb5f0624d1e16e5cb5b25caf858c19879642f359d805b5e90e6e0cb5" exitCode=0
Jan 04 00:24:22 crc kubenswrapper[5108]: I0104 00:24:22.271663 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" event={"ID":"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72","Type":"ContainerDied","Data":"6843f14abb5f0624d1e16e5cb5b25caf858c19879642f359d805b5e90e6e0cb5"}
Jan 04 00:24:22 crc kubenswrapper[5108]: I0104 00:24:22.330477 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29458104-rknh4" podStartSLOduration=18.894782035 podStartE2EDuration="22.330451463s" podCreationTimestamp="2026-01-04 00:24:00 +0000 UTC" firstStartedPulling="2026-01-04 00:24:16.975758501 +0000 UTC m=+830.964323587" lastFinishedPulling="2026-01-04 00:24:20.411427929 +0000 UTC m=+834.399993015" observedRunningTime="2026-01-04 00:24:22.325247159 +0000 UTC m=+836.313812265" watchObservedRunningTime="2026-01-04 00:24:22.330451463 +0000 UTC m=+836.319016549"
Jan 04 00:24:23 crc kubenswrapper[5108]: I0104 00:24:23.282685 5108 generic.go:358] "Generic (PLEG): container finished" podID="2a194bab-cf96-4d6b-b9f6-60bdc5c57621" containerID="91fab3741e0e41eb9ff0379c59b7dbdf4fbd5f18e24da388b85f838afe832e92" exitCode=0
Jan 04 00:24:23 crc kubenswrapper[5108]: I0104 00:24:23.282816 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458104-rknh4" event={"ID":"2a194bab-cf96-4d6b-b9f6-60bdc5c57621","Type":"ContainerDied","Data":"91fab3741e0e41eb9ff0379c59b7dbdf4fbd5f18e24da388b85f838afe832e92"}
Jan 04 00:24:24 crc kubenswrapper[5108]: I0104 00:24:24.422663 5108 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-7llq6 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 04 00:24:24 crc kubenswrapper[5108]: I0104 00:24:24.423362 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-7llq6" podUID="0f0b110c-a11e-4e78-8e42-10c104fcf868" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.190253 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm"
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.194954 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458104-rknh4"
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.285868 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb2vx\" (UniqueName: \"kubernetes.io/projected/2a194bab-cf96-4d6b-b9f6-60bdc5c57621-kube-api-access-zb2vx\") pod \"2a194bab-cf96-4d6b-b9f6-60bdc5c57621\" (UID: \"2a194bab-cf96-4d6b-b9f6-60bdc5c57621\") "
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.286060 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpqjn\" (UniqueName: \"kubernetes.io/projected/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-kube-api-access-tpqjn\") pod \"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72\" (UID: \"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72\") "
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.286216 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-bundle\") pod \"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72\" (UID: \"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72\") "
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.286271 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-util\") pod \"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72\" (UID: \"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72\") "
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.287711 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-bundle" (OuterVolumeSpecName: "bundle") pod "e2e9b244-16b4-4e6b-a6cf-e82f0d019f72" (UID: "e2e9b244-16b4-4e6b-a6cf-e82f0d019f72"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.294923 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a194bab-cf96-4d6b-b9f6-60bdc5c57621-kube-api-access-zb2vx" (OuterVolumeSpecName: "kube-api-access-zb2vx") pod "2a194bab-cf96-4d6b-b9f6-60bdc5c57621" (UID: "2a194bab-cf96-4d6b-b9f6-60bdc5c57621"). InnerVolumeSpecName "kube-api-access-zb2vx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.294986 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-kube-api-access-tpqjn" (OuterVolumeSpecName: "kube-api-access-tpqjn") pod "e2e9b244-16b4-4e6b-a6cf-e82f0d019f72" (UID: "e2e9b244-16b4-4e6b-a6cf-e82f0d019f72"). InnerVolumeSpecName "kube-api-access-tpqjn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.297829 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-util" (OuterVolumeSpecName: "util") pod "e2e9b244-16b4-4e6b-a6cf-e82f0d019f72" (UID: "e2e9b244-16b4-4e6b-a6cf-e82f0d019f72"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.299469 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458104-rknh4"
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.299501 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458104-rknh4" event={"ID":"2a194bab-cf96-4d6b-b9f6-60bdc5c57621","Type":"ContainerDied","Data":"c30ac569f3f61337ba27707fbb547585dae537492891523e4dfe6ad4ac949385"}
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.299562 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c30ac569f3f61337ba27707fbb547585dae537492891523e4dfe6ad4ac949385"
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.302207 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm" event={"ID":"e2e9b244-16b4-4e6b-a6cf-e82f0d019f72","Type":"ContainerDied","Data":"0a85a886710f227b8dbb7c4511a1cce752874e039149cc86c8c635c05ef410a6"}
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.302327 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a85a886710f227b8dbb7c4511a1cce752874e039149cc86c8c635c05ef410a6"
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.302464 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm"
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.352250 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29458098-5vhdx"]
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.358950 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29458098-5vhdx"]
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.387851 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zb2vx\" (UniqueName: \"kubernetes.io/projected/2a194bab-cf96-4d6b-b9f6-60bdc5c57621-kube-api-access-zb2vx\") on node \"crc\" DevicePath \"\""
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.387895 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tpqjn\" (UniqueName: \"kubernetes.io/projected/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-kube-api-access-tpqjn\") on node \"crc\" DevicePath \"\""
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.387907 5108 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-bundle\") on node \"crc\" DevicePath \"\""
Jan 04 00:24:25 crc kubenswrapper[5108]: I0104 00:24:25.387919 5108 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2e9b244-16b4-4e6b-a6cf-e82f0d019f72-util\") on node \"crc\" DevicePath \"\""
Jan 04 00:24:26 crc kubenswrapper[5108]: I0104 00:24:26.478705 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="059ddc1f-fb99-4798-a5cf-c91d217c2763" path="/var/lib/kubelet/pods/059ddc1f-fb99-4798-a5cf-c91d217c2763/volumes"
Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.011425 5108 scope.go:117] "RemoveContainer" containerID="f9066031eef2a52fcb566a68d4929f8db3dde5dfa8247dae8078a6e3831d64ed"
Jan 04 00:24:27 crc kubenswrapper[5108]:
I0104 00:24:27.326129 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7977f944b6-cmftp" event={"ID":"ae32f373-1730-41c4-9061-ff7573625e17","Type":"ContainerStarted","Data":"f9149029da71050883d9cc3626bb085069413133f26a69c0546e3578e18d8b0c"} Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.349342 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-7977f944b6-cmftp" podStartSLOduration=25.747464325 podStartE2EDuration="35.349318615s" podCreationTimestamp="2026-01-04 00:23:52 +0000 UTC" firstStartedPulling="2026-01-04 00:24:16.859899639 +0000 UTC m=+830.848464725" lastFinishedPulling="2026-01-04 00:24:26.461753929 +0000 UTC m=+840.450319015" observedRunningTime="2026-01-04 00:24:27.344735028 +0000 UTC m=+841.333300134" watchObservedRunningTime="2026-01-04 00:24:27.349318615 +0000 UTC m=+841.337883701" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.843002 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.843942 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bfdb0a0d-28f5-46d9-ac35-c1e733798ded" containerName="extract-utilities" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.843961 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfdb0a0d-28f5-46d9-ac35-c1e733798ded" containerName="extract-utilities" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.843972 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bfdb0a0d-28f5-46d9-ac35-c1e733798ded" containerName="registry-server" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.843979 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfdb0a0d-28f5-46d9-ac35-c1e733798ded" containerName="registry-server" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.843999 5108 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2e9b244-16b4-4e6b-a6cf-e82f0d019f72" containerName="pull" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.844007 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e9b244-16b4-4e6b-a6cf-e82f0d019f72" containerName="pull" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.844032 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bfdb0a0d-28f5-46d9-ac35-c1e733798ded" containerName="extract-content" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.844039 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfdb0a0d-28f5-46d9-ac35-c1e733798ded" containerName="extract-content" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.844051 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2e9b244-16b4-4e6b-a6cf-e82f0d019f72" containerName="util" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.844058 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e9b244-16b4-4e6b-a6cf-e82f0d019f72" containerName="util" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.844068 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2a194bab-cf96-4d6b-b9f6-60bdc5c57621" containerName="oc" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.844075 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a194bab-cf96-4d6b-b9f6-60bdc5c57621" containerName="oc" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.844102 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2e9b244-16b4-4e6b-a6cf-e82f0d019f72" containerName="extract" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.844110 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e9b244-16b4-4e6b-a6cf-e82f0d019f72" containerName="extract" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.844262 5108 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="bfdb0a0d-28f5-46d9-ac35-c1e733798ded" containerName="registry-server" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.844282 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="e2e9b244-16b4-4e6b-a6cf-e82f0d019f72" containerName="extract" Jan 04 00:24:27 crc kubenswrapper[5108]: I0104 00:24:27.844297 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="2a194bab-cf96-4d6b-b9f6-60bdc5c57621" containerName="oc" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.092389 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.097434 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.098404 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.098427 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.098767 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.099608 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.099670 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.099671 5108 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.099900 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-g8pgd\"" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.103072 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.104327 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.162020 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.162081 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/8a56d552-f484-43ef-9f02-ea72cc80b853-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.162166 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.162221 5108 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.162247 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.162280 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.162314 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.162627 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.162777 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.162887 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.162929 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.162956 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: 
\"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.163003 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.163055 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.163094 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.273061 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.273139 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: 
\"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.273179 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.273235 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.273274 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.273344 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc 
kubenswrapper[5108]: I0104 00:24:30.273398 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.273438 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.273502 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.273527 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.273567 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: 
\"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.273594 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.273635 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.273667 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.273683 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/8a56d552-f484-43ef-9f02-ea72cc80b853-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.274750 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.274890 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.275660 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.275868 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.276962 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 
00:24:30.277915 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.278336 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.279648 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.283500 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.283754 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.284299 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/8a56d552-f484-43ef-9f02-ea72cc80b853-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.284450 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.284491 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.284641 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.285637 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: 
\"kubernetes.io/secret/8a56d552-f484-43ef-9f02-ea72cc80b853-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"8a56d552-f484-43ef-9f02-ea72cc80b853\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.424166 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Jan 04 00:24:30 crc kubenswrapper[5108]: I0104 00:24:30.721550 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 04 00:24:31 crc kubenswrapper[5108]: I0104 00:24:31.261979 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-r52pf"
Jan 04 00:24:31 crc kubenswrapper[5108]: I0104 00:24:31.359259 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"8a56d552-f484-43ef-9f02-ea72cc80b853","Type":"ContainerStarted","Data":"2da641cecee8aec760d0691e7e7e6a214b9b4f4ae282bb53ac385afa6ab5606a"}
Jan 04 00:24:35 crc kubenswrapper[5108]: I0104 00:24:35.592813 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9gsd4"]
Jan 04 00:24:35 crc kubenswrapper[5108]: I0104 00:24:35.612679 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9gsd4"
Jan 04 00:24:35 crc kubenswrapper[5108]: I0104 00:24:35.616247 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-9kz66\""
Jan 04 00:24:35 crc kubenswrapper[5108]: I0104 00:24:35.616585 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\""
Jan 04 00:24:35 crc kubenswrapper[5108]: I0104 00:24:35.616618 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\""
Jan 04 00:24:35 crc kubenswrapper[5108]: I0104 00:24:35.633436 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc8rq\" (UniqueName: \"kubernetes.io/projected/05fea4b4-f9d0-4e32-83dc-2e3bd6fa9f32-kube-api-access-lc8rq\") pod \"cert-manager-operator-controller-manager-64c74584c4-9gsd4\" (UID: \"05fea4b4-f9d0-4e32-83dc-2e3bd6fa9f32\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9gsd4"
Jan 04 00:24:35 crc kubenswrapper[5108]: I0104 00:24:35.633986 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/05fea4b4-f9d0-4e32-83dc-2e3bd6fa9f32-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-9gsd4\" (UID: \"05fea4b4-f9d0-4e32-83dc-2e3bd6fa9f32\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9gsd4"
Jan 04 00:24:35 crc kubenswrapper[5108]: I0104 00:24:35.639965 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9gsd4"]
Jan 04 00:24:35 crc kubenswrapper[5108]: I0104 00:24:35.736307 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lc8rq\" (UniqueName: \"kubernetes.io/projected/05fea4b4-f9d0-4e32-83dc-2e3bd6fa9f32-kube-api-access-lc8rq\") pod \"cert-manager-operator-controller-manager-64c74584c4-9gsd4\" (UID: \"05fea4b4-f9d0-4e32-83dc-2e3bd6fa9f32\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9gsd4"
Jan 04 00:24:35 crc kubenswrapper[5108]: I0104 00:24:35.736399 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/05fea4b4-f9d0-4e32-83dc-2e3bd6fa9f32-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-9gsd4\" (UID: \"05fea4b4-f9d0-4e32-83dc-2e3bd6fa9f32\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9gsd4"
Jan 04 00:24:35 crc kubenswrapper[5108]: I0104 00:24:35.737156 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/05fea4b4-f9d0-4e32-83dc-2e3bd6fa9f32-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-9gsd4\" (UID: \"05fea4b4-f9d0-4e32-83dc-2e3bd6fa9f32\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9gsd4"
Jan 04 00:24:35 crc kubenswrapper[5108]: I0104 00:24:35.766278 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc8rq\" (UniqueName: \"kubernetes.io/projected/05fea4b4-f9d0-4e32-83dc-2e3bd6fa9f32-kube-api-access-lc8rq\") pod \"cert-manager-operator-controller-manager-64c74584c4-9gsd4\" (UID: \"05fea4b4-f9d0-4e32-83dc-2e3bd6fa9f32\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9gsd4"
Jan 04 00:24:35 crc kubenswrapper[5108]: I0104 00:24:35.945709 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9gsd4"
Jan 04 00:24:36 crc kubenswrapper[5108]: I0104 00:24:36.336635 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9gsd4"]
Jan 04 00:24:36 crc kubenswrapper[5108]: W0104 00:24:36.348891 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05fea4b4_f9d0_4e32_83dc_2e3bd6fa9f32.slice/crio-f4feef58351564363f46fa485698481c0cbc5b995a5671f31532d3fea0cee716 WatchSource:0}: Error finding container f4feef58351564363f46fa485698481c0cbc5b995a5671f31532d3fea0cee716: Status 404 returned error can't find the container with id f4feef58351564363f46fa485698481c0cbc5b995a5671f31532d3fea0cee716
Jan 04 00:24:36 crc kubenswrapper[5108]: I0104 00:24:36.406946 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9gsd4" event={"ID":"05fea4b4-f9d0-4e32-83dc-2e3bd6fa9f32","Type":"ContainerStarted","Data":"f4feef58351564363f46fa485698481c0cbc5b995a5671f31532d3fea0cee716"}
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.589368 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.622411 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.622622 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.625935 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\""
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.626253 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-xhpgk\""
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.626403 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\""
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.626525 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\""
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.711305 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.711368 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.711405 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8j99\" (UniqueName: \"kubernetes.io/projected/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-kube-api-access-g8j99\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.711435 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-builder-dockercfg-xhpgk-push\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.711488 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.711525 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.711547 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-builder-dockercfg-xhpgk-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.711581 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.711604 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.711624 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.711682 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.711729 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.814485 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.814578 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.814616 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-builder-dockercfg-xhpgk-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.814653 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.814685 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.814708 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.814760 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.814811 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.814845 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.814887 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.814918 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g8j99\" (UniqueName: \"kubernetes.io/projected/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-kube-api-access-g8j99\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.815745 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.816731 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.816817 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-builder-dockercfg-xhpgk-push\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.816881 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.816949 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.819656 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.820726 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.826781 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-builder-dockercfg-xhpgk-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.834101 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-builder-dockercfg-xhpgk-push\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.839774 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.840053 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.840727 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8j99\" (UniqueName: \"kubernetes.io/projected/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-kube-api-access-g8j99\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.841852 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:39 crc kubenswrapper[5108]: I0104 00:24:39.962222 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 04 00:24:42 crc kubenswrapper[5108]: I0104 00:24:42.975479 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 04 00:24:42 crc kubenswrapper[5108]: W0104 00:24:42.996743 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaebe66d9_24a7_4f6d_b6ee_2584ad3a766b.slice/crio-dfe8d95198c7c59876b4fe19be48ee169905476215e25a4413600cb44400dbad WatchSource:0}: Error finding container dfe8d95198c7c59876b4fe19be48ee169905476215e25a4413600cb44400dbad: Status 404 returned error can't find the container with id dfe8d95198c7c59876b4fe19be48ee169905476215e25a4413600cb44400dbad
Jan 04 00:24:43 crc kubenswrapper[5108]: I0104 00:24:43.494097 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b","Type":"ContainerStarted","Data":"dfe8d95198c7c59876b4fe19be48ee169905476215e25a4413600cb44400dbad"}
Jan 04 00:24:49 crc kubenswrapper[5108]: I0104 00:24:49.743280 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 04 00:24:51 crc kubenswrapper[5108]: I0104 00:24:51.494549 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.927981 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.928302 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.931305 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\""
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.931728 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\""
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.932875 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\""
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.934218 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.934406 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.934577 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/7fcecf95-bd73-4870-93fe-683ba5d5b655-builder-dockercfg-xhpgk-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.934678 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.934761 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz292\" (UniqueName: \"kubernetes.io/projected/7fcecf95-bd73-4870-93fe-683ba5d5b655-kube-api-access-jz292\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.934898 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7fcecf95-bd73-4870-93fe-683ba5d5b655-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.934937 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.934968 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/7fcecf95-bd73-4870-93fe-683ba5d5b655-builder-dockercfg-xhpgk-push\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.934997 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.935120 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7fcecf95-bd73-4870-93fe-683ba5d5b655-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.935162 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:52 crc kubenswrapper[5108]: I0104 00:24:52.935196 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.037240 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7fcecf95-bd73-4870-93fe-683ba5d5b655-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.037314 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.037344 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.037404 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7fcecf95-bd73-4870-93fe-683ba5d5b655-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.037541 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.037754 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.037822 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/7fcecf95-bd73-4870-93fe-683ba5d5b655-builder-dockercfg-xhpgk-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.037850 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.037893 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jz292\" (UniqueName: \"kubernetes.io/projected/7fcecf95-bd73-4870-93fe-683ba5d5b655-kube-api-access-jz292\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.037961 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7fcecf95-bd73-4870-93fe-683ba5d5b655-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.037979 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.037996 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.038036 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/7fcecf95-bd73-4870-93fe-683ba5d5b655-builder-dockercfg-xhpgk-push\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.038109 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.038130 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.038531 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.038584 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7fcecf95-bd73-4870-93fe-683ba5d5b655-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.038609 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.038708 5108 operation_generator.go:615] "MountVolume.SetUp succeeded
for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.039160 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.039300 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.045488 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/7fcecf95-bd73-4870-93fe-683ba5d5b655-builder-dockercfg-xhpgk-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.045679 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/7fcecf95-bd73-4870-93fe-683ba5d5b655-builder-dockercfg-xhpgk-push\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 04 00:24:53 crc 
kubenswrapper[5108]: I0104 00:24:53.059054 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz292\" (UniqueName: \"kubernetes.io/projected/7fcecf95-bd73-4870-93fe-683ba5d5b655-kube-api-access-jz292\") pod \"service-telemetry-operator-2-build\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 04 00:24:53 crc kubenswrapper[5108]: I0104 00:24:53.253270 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 04 00:24:54 crc kubenswrapper[5108]: I0104 00:24:54.918081 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 04 00:24:54 crc kubenswrapper[5108]: I0104 00:24:54.918234 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 04 00:25:14 crc kubenswrapper[5108]: I0104 00:25:14.209420 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"8a56d552-f484-43ef-9f02-ea72cc80b853","Type":"ContainerStarted","Data":"1dcc707caea8f7c722633a6975fc3154fb7e9539cc35bf22a4ece8fa86592333"} Jan 04 00:25:14 crc kubenswrapper[5108]: I0104 00:25:14.211619 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9gsd4" 
event={"ID":"05fea4b4-f9d0-4e32-83dc-2e3bd6fa9f32","Type":"ContainerStarted","Data":"8ba1cdbe8fc9dadab082ddfa01176f21397aa045304a3d488f1e031a7aed5be4"} Jan 04 00:25:14 crc kubenswrapper[5108]: I0104 00:25:14.233658 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 04 00:25:14 crc kubenswrapper[5108]: W0104 00:25:14.249250 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fcecf95_bd73_4870_93fe_683ba5d5b655.slice/crio-49055881efe740584a678718e7b70d85f2982f772081b0aeef951aba778e922d WatchSource:0}: Error finding container 49055881efe740584a678718e7b70d85f2982f772081b0aeef951aba778e922d: Status 404 returned error can't find the container with id 49055881efe740584a678718e7b70d85f2982f772081b0aeef951aba778e922d Jan 04 00:25:14 crc kubenswrapper[5108]: I0104 00:25:14.301602 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9gsd4" podStartSLOduration=2.430214634 podStartE2EDuration="39.301576443s" podCreationTimestamp="2026-01-04 00:24:35 +0000 UTC" firstStartedPulling="2026-01-04 00:24:36.351931072 +0000 UTC m=+850.340496158" lastFinishedPulling="2026-01-04 00:25:13.223292871 +0000 UTC m=+887.211857967" observedRunningTime="2026-01-04 00:25:14.297963214 +0000 UTC m=+888.286528320" watchObservedRunningTime="2026-01-04 00:25:14.301576443 +0000 UTC m=+888.290141529" Jan 04 00:25:14 crc kubenswrapper[5108]: I0104 00:25:14.362238 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 04 00:25:14 crc kubenswrapper[5108]: I0104 00:25:14.405408 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.233895 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b","Type":"ContainerStarted","Data":"310dd695261b71ae2e0c2d5b6e698edfebdb6d377c0cc74d72a4ce004f787c85"} Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.234002 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" containerName="manage-dockerfile" containerID="cri-o://310dd695261b71ae2e0c2d5b6e698edfebdb6d377c0cc74d72a4ce004f787c85" gracePeriod=30 Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.247643 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"7fcecf95-bd73-4870-93fe-683ba5d5b655","Type":"ContainerStarted","Data":"5109d710e6487800bd76d6ef0aeac17b70fa0c543d93ed8f7ac67336ccb57321"} Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.247713 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"7fcecf95-bd73-4870-93fe-683ba5d5b655","Type":"ContainerStarted","Data":"49055881efe740584a678718e7b70d85f2982f772081b0aeef951aba778e922d"} Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.775509 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_aebe66d9-24a7-4f6d-b6ee-2584ad3a766b/manage-dockerfile/0.log" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.776019 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.873832 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-blob-cache\") pod \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.873929 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-node-pullsecrets\") pod \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.873965 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-proxy-ca-bundles\") pod \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.873990 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-builder-dockercfg-xhpgk-pull\") pod \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.874031 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-container-storage-run\") pod \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.874059 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-builder-dockercfg-xhpgk-push\") pod \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.874104 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-buildworkdir\") pod \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.874130 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" (UID: "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.874266 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-buildcachedir\") pod \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.874340 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-ca-bundles\") pod \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.874394 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-system-configs\") pod \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.874491 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-container-storage-root\") pod \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.874539 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8j99\" (UniqueName: \"kubernetes.io/projected/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-kube-api-access-g8j99\") pod \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\" (UID: \"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b\") " Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.874888 5108 reconciler_common.go:299] "Volume detached for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.874886 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" (UID: "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.875013 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" (UID: "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.875467 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" (UID: "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.875729 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" (UID: "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.875736 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" (UID: "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.875837 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" (UID: "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.876288 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" (UID: "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.876356 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" (UID: "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.884981 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-builder-dockercfg-xhpgk-push" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-push") pod "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" (UID: "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b"). InnerVolumeSpecName "builder-dockercfg-xhpgk-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.885997 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-kube-api-access-g8j99" (OuterVolumeSpecName: "kube-api-access-g8j99") pod "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" (UID: "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b"). InnerVolumeSpecName "kube-api-access-g8j99". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.887132 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-builder-dockercfg-xhpgk-pull" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-pull") pod "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" (UID: "aebe66d9-24a7-4f6d-b6ee-2584ad3a766b"). InnerVolumeSpecName "builder-dockercfg-xhpgk-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.976233 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.976554 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.976567 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.976575 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g8j99\" (UniqueName: \"kubernetes.io/projected/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-kube-api-access-g8j99\") on node \"crc\" DevicePath \"\"" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.976583 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.976591 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.976602 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-builder-dockercfg-xhpgk-pull\") on node 
\"crc\" DevicePath \"\"" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.976610 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.976618 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-builder-dockercfg-xhpgk-push\") on node \"crc\" DevicePath \"\"" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.976626 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 04 00:25:15 crc kubenswrapper[5108]: I0104 00:25:15.976634 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 04 00:25:16 crc kubenswrapper[5108]: I0104 00:25:16.257634 5108 generic.go:358] "Generic (PLEG): container finished" podID="8a56d552-f484-43ef-9f02-ea72cc80b853" containerID="1dcc707caea8f7c722633a6975fc3154fb7e9539cc35bf22a4ece8fa86592333" exitCode=0 Jan 04 00:25:16 crc kubenswrapper[5108]: I0104 00:25:16.257777 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"8a56d552-f484-43ef-9f02-ea72cc80b853","Type":"ContainerDied","Data":"1dcc707caea8f7c722633a6975fc3154fb7e9539cc35bf22a4ece8fa86592333"} Jan 04 00:25:16 crc kubenswrapper[5108]: I0104 00:25:16.260152 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_aebe66d9-24a7-4f6d-b6ee-2584ad3a766b/manage-dockerfile/0.log" Jan 04 00:25:16 crc kubenswrapper[5108]: I0104 
00:25:16.260189 5108 generic.go:358] "Generic (PLEG): container finished" podID="aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" containerID="310dd695261b71ae2e0c2d5b6e698edfebdb6d377c0cc74d72a4ce004f787c85" exitCode=1 Jan 04 00:25:16 crc kubenswrapper[5108]: I0104 00:25:16.260349 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 04 00:25:16 crc kubenswrapper[5108]: I0104 00:25:16.260425 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b","Type":"ContainerDied","Data":"310dd695261b71ae2e0c2d5b6e698edfebdb6d377c0cc74d72a4ce004f787c85"} Jan 04 00:25:16 crc kubenswrapper[5108]: I0104 00:25:16.260489 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"aebe66d9-24a7-4f6d-b6ee-2584ad3a766b","Type":"ContainerDied","Data":"dfe8d95198c7c59876b4fe19be48ee169905476215e25a4413600cb44400dbad"} Jan 04 00:25:16 crc kubenswrapper[5108]: I0104 00:25:16.260511 5108 scope.go:117] "RemoveContainer" containerID="310dd695261b71ae2e0c2d5b6e698edfebdb6d377c0cc74d72a4ce004f787c85" Jan 04 00:25:16 crc kubenswrapper[5108]: I0104 00:25:16.289340 5108 scope.go:117] "RemoveContainer" containerID="310dd695261b71ae2e0c2d5b6e698edfebdb6d377c0cc74d72a4ce004f787c85" Jan 04 00:25:16 crc kubenswrapper[5108]: E0104 00:25:16.289937 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"310dd695261b71ae2e0c2d5b6e698edfebdb6d377c0cc74d72a4ce004f787c85\": container with ID starting with 310dd695261b71ae2e0c2d5b6e698edfebdb6d377c0cc74d72a4ce004f787c85 not found: ID does not exist" containerID="310dd695261b71ae2e0c2d5b6e698edfebdb6d377c0cc74d72a4ce004f787c85" Jan 04 00:25:16 crc kubenswrapper[5108]: I0104 00:25:16.289985 5108 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310dd695261b71ae2e0c2d5b6e698edfebdb6d377c0cc74d72a4ce004f787c85"} err="failed to get container status \"310dd695261b71ae2e0c2d5b6e698edfebdb6d377c0cc74d72a4ce004f787c85\": rpc error: code = NotFound desc = could not find container \"310dd695261b71ae2e0c2d5b6e698edfebdb6d377c0cc74d72a4ce004f787c85\": container with ID starting with 310dd695261b71ae2e0c2d5b6e698edfebdb6d377c0cc74d72a4ce004f787c85 not found: ID does not exist" Jan 04 00:25:16 crc kubenswrapper[5108]: I0104 00:25:16.311918 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 04 00:25:16 crc kubenswrapper[5108]: I0104 00:25:16.324701 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 04 00:25:16 crc kubenswrapper[5108]: I0104 00:25:16.458240 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" path="/var/lib/kubelet/pods/aebe66d9-24a7-4f6d-b6ee-2584ad3a766b/volumes" Jan 04 00:25:17 crc kubenswrapper[5108]: I0104 00:25:17.271499 5108 generic.go:358] "Generic (PLEG): container finished" podID="8a56d552-f484-43ef-9f02-ea72cc80b853" containerID="0363e156cb80848feac7f5d8cccfcb58f12cd1d6da697a1b34eaf9746d40d627" exitCode=0 Jan 04 00:25:17 crc kubenswrapper[5108]: I0104 00:25:17.271593 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"8a56d552-f484-43ef-9f02-ea72cc80b853","Type":"ContainerDied","Data":"0363e156cb80848feac7f5d8cccfcb58f12cd1d6da697a1b34eaf9746d40d627"} Jan 04 00:25:18 crc kubenswrapper[5108]: I0104 00:25:18.288839 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"8a56d552-f484-43ef-9f02-ea72cc80b853","Type":"ContainerStarted","Data":"61afffa4a9a8a6028458ad10bd3d35661dfd654cc2e14423bad3fef3bc455122"} 
Jan 04 00:25:18 crc kubenswrapper[5108]: I0104 00:25:18.289657 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 04 00:25:18 crc kubenswrapper[5108]: I0104 00:25:18.337985 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=8.244831774 podStartE2EDuration="51.33796223s" podCreationTimestamp="2026-01-04 00:24:27 +0000 UTC" firstStartedPulling="2026-01-04 00:24:30.748000443 +0000 UTC m=+844.736565529" lastFinishedPulling="2026-01-04 00:25:13.841130899 +0000 UTC m=+887.829695985" observedRunningTime="2026-01-04 00:25:18.334649459 +0000 UTC m=+892.323214555" watchObservedRunningTime="2026-01-04 00:25:18.33796223 +0000 UTC m=+892.326527316" Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.447855 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-72zhx"] Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.449629 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" containerName="manage-dockerfile" Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.449667 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" containerName="manage-dockerfile" Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.449838 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="aebe66d9-24a7-4f6d-b6ee-2584ad3a766b" containerName="manage-dockerfile" Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.612228 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-72zhx"
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.615798 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-7l699\""
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.616169 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\""
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.616380 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\""
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.624729 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-72zhx"]
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.689744 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-cnrpf"]
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.755794 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64fc2ae4-d44c-4843-9750-971e567d50c3-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-72zhx\" (UID: \"64fc2ae4-d44c-4843-9750-971e567d50c3\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-72zhx"
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.755873 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqg54\" (UniqueName: \"kubernetes.io/projected/64fc2ae4-d44c-4843-9750-971e567d50c3-kube-api-access-kqg54\") pod \"cert-manager-webhook-7894b5b9b4-72zhx\" (UID: \"64fc2ae4-d44c-4843-9750-971e567d50c3\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-72zhx"
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.857690 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kqg54\" (UniqueName: \"kubernetes.io/projected/64fc2ae4-d44c-4843-9750-971e567d50c3-kube-api-access-kqg54\") pod \"cert-manager-webhook-7894b5b9b4-72zhx\" (UID: \"64fc2ae4-d44c-4843-9750-971e567d50c3\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-72zhx"
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.857816 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64fc2ae4-d44c-4843-9750-971e567d50c3-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-72zhx\" (UID: \"64fc2ae4-d44c-4843-9750-971e567d50c3\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-72zhx"
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.890541 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64fc2ae4-d44c-4843-9750-971e567d50c3-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-72zhx\" (UID: \"64fc2ae4-d44c-4843-9750-971e567d50c3\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-72zhx"
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.891809 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-cnrpf"]
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.892024 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-cnrpf"
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.892969 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqg54\" (UniqueName: \"kubernetes.io/projected/64fc2ae4-d44c-4843-9750-971e567d50c3-kube-api-access-kqg54\") pod \"cert-manager-webhook-7894b5b9b4-72zhx\" (UID: \"64fc2ae4-d44c-4843-9750-971e567d50c3\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-72zhx"
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.894724 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-sw889\""
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.934268 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-72zhx"
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.959919 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/331877d2-3f29-4eac-897c-010b1d98fda4-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-cnrpf\" (UID: \"331877d2-3f29-4eac-897c-010b1d98fda4\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-cnrpf"
Jan 04 00:25:20 crc kubenswrapper[5108]: I0104 00:25:20.960134 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bdg4\" (UniqueName: \"kubernetes.io/projected/331877d2-3f29-4eac-897c-010b1d98fda4-kube-api-access-9bdg4\") pod \"cert-manager-cainjector-7dbf76d5c8-cnrpf\" (UID: \"331877d2-3f29-4eac-897c-010b1d98fda4\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-cnrpf"
Jan 04 00:25:21 crc kubenswrapper[5108]: I0104 00:25:21.061609 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/331877d2-3f29-4eac-897c-010b1d98fda4-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-cnrpf\" (UID: \"331877d2-3f29-4eac-897c-010b1d98fda4\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-cnrpf"
Jan 04 00:25:21 crc kubenswrapper[5108]: I0104 00:25:21.062188 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9bdg4\" (UniqueName: \"kubernetes.io/projected/331877d2-3f29-4eac-897c-010b1d98fda4-kube-api-access-9bdg4\") pod \"cert-manager-cainjector-7dbf76d5c8-cnrpf\" (UID: \"331877d2-3f29-4eac-897c-010b1d98fda4\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-cnrpf"
Jan 04 00:25:21 crc kubenswrapper[5108]: I0104 00:25:21.144552 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/331877d2-3f29-4eac-897c-010b1d98fda4-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-cnrpf\" (UID: \"331877d2-3f29-4eac-897c-010b1d98fda4\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-cnrpf"
Jan 04 00:25:21 crc kubenswrapper[5108]: I0104 00:25:21.144689 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bdg4\" (UniqueName: \"kubernetes.io/projected/331877d2-3f29-4eac-897c-010b1d98fda4-kube-api-access-9bdg4\") pod \"cert-manager-cainjector-7dbf76d5c8-cnrpf\" (UID: \"331877d2-3f29-4eac-897c-010b1d98fda4\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-cnrpf"
Jan 04 00:25:21 crc kubenswrapper[5108]: I0104 00:25:21.268350 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-cnrpf"
Jan 04 00:25:21 crc kubenswrapper[5108]: I0104 00:25:21.564517 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-72zhx"]
Jan 04 00:25:21 crc kubenswrapper[5108]: I0104 00:25:21.821505 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-cnrpf"]
Jan 04 00:25:22 crc kubenswrapper[5108]: I0104 00:25:22.528301 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-72zhx" event={"ID":"64fc2ae4-d44c-4843-9750-971e567d50c3","Type":"ContainerStarted","Data":"49d95d7c39b8edddb77e5ddd4396091a4955bfc27a144ff0f224d72704e72d6b"}
Jan 04 00:25:22 crc kubenswrapper[5108]: I0104 00:25:22.707878 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-cnrpf" event={"ID":"331877d2-3f29-4eac-897c-010b1d98fda4","Type":"ContainerStarted","Data":"867ef1ebb7b50f35234d14f801fe255b8776c26fd627612825e3272b0840e3c0"}
Jan 04 00:25:23 crc kubenswrapper[5108]: I0104 00:25:23.721461 5108 generic.go:358] "Generic (PLEG): container finished" podID="7fcecf95-bd73-4870-93fe-683ba5d5b655" containerID="5109d710e6487800bd76d6ef0aeac17b70fa0c543d93ed8f7ac67336ccb57321" exitCode=0
Jan 04 00:25:23 crc kubenswrapper[5108]: I0104 00:25:23.721799 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"7fcecf95-bd73-4870-93fe-683ba5d5b655","Type":"ContainerDied","Data":"5109d710e6487800bd76d6ef0aeac17b70fa0c543d93ed8f7ac67336ccb57321"}
Jan 04 00:25:24 crc kubenswrapper[5108]: I0104 00:25:24.744960 5108 generic.go:358] "Generic (PLEG): container finished" podID="7fcecf95-bd73-4870-93fe-683ba5d5b655" containerID="5abf935cd8888ee4ca4392b50a359b18531300a6a36de92b723ed86c01f9179f" exitCode=0
Jan 04 00:25:24 crc kubenswrapper[5108]: I0104 00:25:24.745040 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"7fcecf95-bd73-4870-93fe-683ba5d5b655","Type":"ContainerDied","Data":"5abf935cd8888ee4ca4392b50a359b18531300a6a36de92b723ed86c01f9179f"}
Jan 04 00:25:24 crc kubenswrapper[5108]: I0104 00:25:24.788322 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_7fcecf95-bd73-4870-93fe-683ba5d5b655/manage-dockerfile/0.log"
Jan 04 00:25:24 crc kubenswrapper[5108]: I0104 00:25:24.917822 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 04 00:25:24 crc kubenswrapper[5108]: I0104 00:25:24.918733 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 04 00:25:25 crc kubenswrapper[5108]: I0104 00:25:25.764810 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"7fcecf95-bd73-4870-93fe-683ba5d5b655","Type":"ContainerStarted","Data":"23817013ee042466ce38e131cee8a9e191f37a19d9e225f3ac00285a85369431"}
Jan 04 00:25:25 crc kubenswrapper[5108]: I0104 00:25:25.803997 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=34.803965237 podStartE2EDuration="34.803965237s" podCreationTimestamp="2026-01-04 00:24:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:25:25.802304612 +0000 UTC m=+899.790869718" watchObservedRunningTime="2026-01-04 00:25:25.803965237 +0000 UTC m=+899.792530333"
Jan 04 00:25:26 crc kubenswrapper[5108]: I0104 00:25:26.817169 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzs5n_8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23/kube-multus/0.log"
Jan 04 00:25:26 crc kubenswrapper[5108]: I0104 00:25:26.817406 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzs5n_8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23/kube-multus/0.log"
Jan 04 00:25:26 crc kubenswrapper[5108]: I0104 00:25:26.840295 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 04 00:25:26 crc kubenswrapper[5108]: I0104 00:25:26.840592 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 04 00:25:29 crc kubenswrapper[5108]: I0104 00:25:29.756360 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="8a56d552-f484-43ef-9f02-ea72cc80b853" containerName="elasticsearch" probeResult="failure" output=<
Jan 04 00:25:29 crc kubenswrapper[5108]: {"timestamp": "2026-01-04T00:25:29+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 04 00:25:29 crc kubenswrapper[5108]: >
Jan 04 00:25:35 crc kubenswrapper[5108]: I0104 00:25:35.208040 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="8a56d552-f484-43ef-9f02-ea72cc80b853" containerName="elasticsearch" probeResult="failure" output=<
Jan 04 00:25:35 crc kubenswrapper[5108]: {"timestamp": "2026-01-04T00:25:35+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 04 00:25:35 crc kubenswrapper[5108]: >
Jan 04 00:25:37 crc kubenswrapper[5108]: I0104 00:25:37.116171 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-4kp96"]
Jan 04 00:25:37 crc kubenswrapper[5108]: I0104 00:25:37.172879 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-4kp96"]
Jan 04 00:25:37 crc kubenswrapper[5108]: I0104 00:25:37.172998 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-4kp96"
Jan 04 00:25:37 crc kubenswrapper[5108]: I0104 00:25:37.178670 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-zcdd2\""
Jan 04 00:25:37 crc kubenswrapper[5108]: I0104 00:25:37.297285 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f6660297-af47-40ae-b909-73f073b53693-bound-sa-token\") pod \"cert-manager-858d87f86b-4kp96\" (UID: \"f6660297-af47-40ae-b909-73f073b53693\") " pod="cert-manager/cert-manager-858d87f86b-4kp96"
Jan 04 00:25:37 crc kubenswrapper[5108]: I0104 00:25:37.297426 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gdn6\" (UniqueName: \"kubernetes.io/projected/f6660297-af47-40ae-b909-73f073b53693-kube-api-access-8gdn6\") pod \"cert-manager-858d87f86b-4kp96\" (UID: \"f6660297-af47-40ae-b909-73f073b53693\") " pod="cert-manager/cert-manager-858d87f86b-4kp96"
Jan 04 00:25:37 crc kubenswrapper[5108]: I0104 00:25:37.400038 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f6660297-af47-40ae-b909-73f073b53693-bound-sa-token\") pod \"cert-manager-858d87f86b-4kp96\" (UID: \"f6660297-af47-40ae-b909-73f073b53693\") " pod="cert-manager/cert-manager-858d87f86b-4kp96"
Jan 04 00:25:37 crc kubenswrapper[5108]: I0104 00:25:37.400636 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8gdn6\" (UniqueName: \"kubernetes.io/projected/f6660297-af47-40ae-b909-73f073b53693-kube-api-access-8gdn6\") pod \"cert-manager-858d87f86b-4kp96\" (UID: \"f6660297-af47-40ae-b909-73f073b53693\") " pod="cert-manager/cert-manager-858d87f86b-4kp96"
Jan 04 00:25:37 crc kubenswrapper[5108]: I0104 00:25:37.432979 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f6660297-af47-40ae-b909-73f073b53693-bound-sa-token\") pod \"cert-manager-858d87f86b-4kp96\" (UID: \"f6660297-af47-40ae-b909-73f073b53693\") " pod="cert-manager/cert-manager-858d87f86b-4kp96"
Jan 04 00:25:37 crc kubenswrapper[5108]: I0104 00:25:37.433541 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gdn6\" (UniqueName: \"kubernetes.io/projected/f6660297-af47-40ae-b909-73f073b53693-kube-api-access-8gdn6\") pod \"cert-manager-858d87f86b-4kp96\" (UID: \"f6660297-af47-40ae-b909-73f073b53693\") " pod="cert-manager/cert-manager-858d87f86b-4kp96"
Jan 04 00:25:37 crc kubenswrapper[5108]: I0104 00:25:37.499930 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-4kp96"
Jan 04 00:25:39 crc kubenswrapper[5108]: I0104 00:25:39.771481 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="8a56d552-f484-43ef-9f02-ea72cc80b853" containerName="elasticsearch" probeResult="failure" output=<
Jan 04 00:25:39 crc kubenswrapper[5108]: {"timestamp": "2026-01-04T00:25:39+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 04 00:25:39 crc kubenswrapper[5108]: >
Jan 04 00:25:44 crc kubenswrapper[5108]: I0104 00:25:44.950768 5108 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="8a56d552-f484-43ef-9f02-ea72cc80b853" containerName="elasticsearch" probeResult="failure" output=<
Jan 04 00:25:44 crc kubenswrapper[5108]: {"timestamp": "2026-01-04T00:25:44+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 04 00:25:44 crc kubenswrapper[5108]: >
Jan 04 00:25:50 crc kubenswrapper[5108]: I0104 00:25:50.169300 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Jan 04 00:25:54 crc kubenswrapper[5108]: I0104 00:25:54.080481 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-72zhx" event={"ID":"64fc2ae4-d44c-4843-9750-971e567d50c3","Type":"ContainerStarted","Data":"61feac3e2514afb31d3403e32cecae2377413ba02ba899adffdb7878aa2fa4db"}
Jan 04 00:25:54 crc kubenswrapper[5108]: I0104 00:25:54.082699 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-72zhx"
Jan 04 00:25:54 crc kubenswrapper[5108]: I0104 00:25:54.083426 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-cnrpf" event={"ID":"331877d2-3f29-4eac-897c-010b1d98fda4","Type":"ContainerStarted","Data":"0f1bcbf2cbccb5dc9a171a7802bae79076559a0f97ac401c546aa9e8653b50ff"}
Jan 04 00:25:54 crc kubenswrapper[5108]: I0104 00:25:54.100968 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-72zhx" podStartSLOduration=2.05671211 podStartE2EDuration="34.10094758s" podCreationTimestamp="2026-01-04 00:25:20 +0000 UTC" firstStartedPulling="2026-01-04 00:25:21.575778248 +0000 UTC m=+895.564343334" lastFinishedPulling="2026-01-04 00:25:53.620013718 +0000 UTC m=+927.608578804" observedRunningTime="2026-01-04 00:25:54.098465002 +0000 UTC m=+928.087030098" watchObservedRunningTime="2026-01-04 00:25:54.10094758 +0000 UTC m=+928.089512666"
Jan 04 00:25:54 crc kubenswrapper[5108]: I0104 00:25:54.120266 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-cnrpf" podStartSLOduration=2.372389223 podStartE2EDuration="34.120238385s" podCreationTimestamp="2026-01-04 00:25:20 +0000 UTC" firstStartedPulling="2026-01-04 00:25:21.830860842 +0000 UTC m=+895.819425938" lastFinishedPulling="2026-01-04 00:25:53.578710014 +0000 UTC m=+927.567275100" observedRunningTime="2026-01-04 00:25:54.116283128 +0000 UTC m=+928.104848214" watchObservedRunningTime="2026-01-04 00:25:54.120238385 +0000 UTC m=+928.108803481"
Jan 04 00:25:54 crc kubenswrapper[5108]: I0104 00:25:54.287517 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-4kp96"]
Jan 04 00:25:54 crc kubenswrapper[5108]: I0104 00:25:54.917863 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 04 00:25:54 crc kubenswrapper[5108]: I0104 00:25:54.918539 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 04 00:25:54 crc kubenswrapper[5108]: I0104 00:25:54.918613 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-njl5v"
Jan 04 00:25:54 crc kubenswrapper[5108]: I0104 00:25:54.919769 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8dc27842f4ece5439b06d6ce112671ad3f7bc8894f51d9a8d835c365dc97f45"} pod="openshift-machine-config-operator/machine-config-daemon-njl5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 04 00:25:54 crc kubenswrapper[5108]: I0104 00:25:54.919843 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" containerID="cri-o://c8dc27842f4ece5439b06d6ce112671ad3f7bc8894f51d9a8d835c365dc97f45" gracePeriod=600
Jan 04 00:25:55 crc kubenswrapper[5108]: I0104 00:25:55.094454 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-4kp96" event={"ID":"f6660297-af47-40ae-b909-73f073b53693","Type":"ContainerStarted","Data":"5ea0fb991befb437d062aa7b675006a5ef46bc6a7e5b81e38060301c1f8937fe"}
Jan 04 00:25:55 crc kubenswrapper[5108]: I0104 00:25:55.094509 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-4kp96" event={"ID":"f6660297-af47-40ae-b909-73f073b53693","Type":"ContainerStarted","Data":"e2d3a2de454c19164e8f661a502eadf2f47c21684c29481adeacfe2aa4b61bc4"}
Jan 04 00:25:56 crc kubenswrapper[5108]: I0104 00:25:56.104161 5108 generic.go:358] "Generic (PLEG): container finished" podID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerID="c8dc27842f4ece5439b06d6ce112671ad3f7bc8894f51d9a8d835c365dc97f45" exitCode=0
Jan 04 00:25:56 crc kubenswrapper[5108]: I0104 00:25:56.104262 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerDied","Data":"c8dc27842f4ece5439b06d6ce112671ad3f7bc8894f51d9a8d835c365dc97f45"}
Jan 04 00:25:56 crc kubenswrapper[5108]: I0104 00:25:56.104908 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerStarted","Data":"d315c271b5ebb5ccd4137805a4c0a0f8051b40ee81c1c5c36d5b609914f2eb07"}
Jan 04 00:25:56 crc kubenswrapper[5108]: I0104 00:25:56.104941 5108 scope.go:117] "RemoveContainer" containerID="335e8dafd09ef6d4b5814847b54a00f48c49785e811fdaed2b4bdcd55dc20429"
Jan 04 00:25:56 crc kubenswrapper[5108]: I0104 00:25:56.125321 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-4kp96" podStartSLOduration=19.125295966 podStartE2EDuration="19.125295966s" podCreationTimestamp="2026-01-04 00:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:25:55.131683288 +0000 UTC m=+929.120248384" watchObservedRunningTime="2026-01-04 00:25:56.125295966 +0000 UTC m=+930.113861052"
Jan 04 00:26:00 crc kubenswrapper[5108]: I0104 00:26:00.141365 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29458106-mttmg"]
Jan 04 00:26:00 crc kubenswrapper[5108]: I0104 00:26:00.149728 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458106-mttmg"
Jan 04 00:26:00 crc kubenswrapper[5108]: I0104 00:26:00.150961 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458106-mttmg"]
Jan 04 00:26:00 crc kubenswrapper[5108]: I0104 00:26:00.160695 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-s7k94\""
Jan 04 00:26:00 crc kubenswrapper[5108]: I0104 00:26:00.160803 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 04 00:26:00 crc kubenswrapper[5108]: I0104 00:26:00.161911 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 04 00:26:00 crc kubenswrapper[5108]: I0104 00:26:00.180899 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7grz2\" (UniqueName: \"kubernetes.io/projected/dea28c0a-3424-4943-adfa-182583f45b2b-kube-api-access-7grz2\") pod \"auto-csr-approver-29458106-mttmg\" (UID: \"dea28c0a-3424-4943-adfa-182583f45b2b\") " pod="openshift-infra/auto-csr-approver-29458106-mttmg"
Jan 04 00:26:00 crc kubenswrapper[5108]: I0104 00:26:00.282925 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7grz2\" (UniqueName: \"kubernetes.io/projected/dea28c0a-3424-4943-adfa-182583f45b2b-kube-api-access-7grz2\") pod \"auto-csr-approver-29458106-mttmg\" (UID: \"dea28c0a-3424-4943-adfa-182583f45b2b\") " pod="openshift-infra/auto-csr-approver-29458106-mttmg"
Jan 04 00:26:00 crc kubenswrapper[5108]: I0104 00:26:00.477616 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7grz2\" (UniqueName: \"kubernetes.io/projected/dea28c0a-3424-4943-adfa-182583f45b2b-kube-api-access-7grz2\") pod \"auto-csr-approver-29458106-mttmg\" (UID: \"dea28c0a-3424-4943-adfa-182583f45b2b\") " pod="openshift-infra/auto-csr-approver-29458106-mttmg"
Jan 04 00:26:00 crc kubenswrapper[5108]: I0104 00:26:00.480497 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458106-mttmg"
Jan 04 00:26:01 crc kubenswrapper[5108]: I0104 00:26:01.110950 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-72zhx"
Jan 04 00:26:01 crc kubenswrapper[5108]: I0104 00:26:01.214872 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458106-mttmg"]
Jan 04 00:26:02 crc kubenswrapper[5108]: I0104 00:26:02.198890 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458106-mttmg" event={"ID":"dea28c0a-3424-4943-adfa-182583f45b2b","Type":"ContainerStarted","Data":"6165d7cdefe07e430313f5a0a99acbcfdf65ebd346aed8b4a96db466ea51d26c"}
Jan 04 00:26:03 crc kubenswrapper[5108]: I0104 00:26:03.210493 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458106-mttmg" event={"ID":"dea28c0a-3424-4943-adfa-182583f45b2b","Type":"ContainerStarted","Data":"35ddd83f39d73de7f5efce4a5a158390e11f945ba1065af1dd0a58cb6d71a35f"}
Jan 04 00:26:03 crc kubenswrapper[5108]: I0104 00:26:03.241623 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29458106-mttmg" podStartSLOduration=2.016627978 podStartE2EDuration="3.241597283s" podCreationTimestamp="2026-01-04 00:26:00 +0000 UTC" firstStartedPulling="2026-01-04 00:26:01.244171261 +0000 UTC m=+935.232736347" lastFinishedPulling="2026-01-04 00:26:02.469140566 +0000 UTC m=+936.457705652" observedRunningTime="2026-01-04 00:26:03.23373805 +0000 UTC m=+937.222303156" watchObservedRunningTime="2026-01-04 00:26:03.241597283 +0000 UTC m=+937.230162389"
Jan 04 00:26:04 crc kubenswrapper[5108]: I0104 00:26:04.222030 5108 generic.go:358] "Generic (PLEG): container finished" podID="dea28c0a-3424-4943-adfa-182583f45b2b" containerID="35ddd83f39d73de7f5efce4a5a158390e11f945ba1065af1dd0a58cb6d71a35f" exitCode=0
Jan 04 00:26:04 crc kubenswrapper[5108]: I0104 00:26:04.222174 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458106-mttmg" event={"ID":"dea28c0a-3424-4943-adfa-182583f45b2b","Type":"ContainerDied","Data":"35ddd83f39d73de7f5efce4a5a158390e11f945ba1065af1dd0a58cb6d71a35f"}
Jan 04 00:26:05 crc kubenswrapper[5108]: I0104 00:26:05.710497 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458106-mttmg"
Jan 04 00:26:05 crc kubenswrapper[5108]: I0104 00:26:05.816641 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7grz2\" (UniqueName: \"kubernetes.io/projected/dea28c0a-3424-4943-adfa-182583f45b2b-kube-api-access-7grz2\") pod \"dea28c0a-3424-4943-adfa-182583f45b2b\" (UID: \"dea28c0a-3424-4943-adfa-182583f45b2b\") "
Jan 04 00:26:05 crc kubenswrapper[5108]: I0104 00:26:05.825831 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea28c0a-3424-4943-adfa-182583f45b2b-kube-api-access-7grz2" (OuterVolumeSpecName: "kube-api-access-7grz2") pod "dea28c0a-3424-4943-adfa-182583f45b2b" (UID: "dea28c0a-3424-4943-adfa-182583f45b2b"). InnerVolumeSpecName "kube-api-access-7grz2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:26:05 crc kubenswrapper[5108]: I0104 00:26:05.918793 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7grz2\" (UniqueName: \"kubernetes.io/projected/dea28c0a-3424-4943-adfa-182583f45b2b-kube-api-access-7grz2\") on node \"crc\" DevicePath \"\""
Jan 04 00:26:06 crc kubenswrapper[5108]: I0104 00:26:06.242987 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458106-mttmg"
Jan 04 00:26:06 crc kubenswrapper[5108]: I0104 00:26:06.242999 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458106-mttmg" event={"ID":"dea28c0a-3424-4943-adfa-182583f45b2b","Type":"ContainerDied","Data":"6165d7cdefe07e430313f5a0a99acbcfdf65ebd346aed8b4a96db466ea51d26c"}
Jan 04 00:26:06 crc kubenswrapper[5108]: I0104 00:26:06.243077 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6165d7cdefe07e430313f5a0a99acbcfdf65ebd346aed8b4a96db466ea51d26c"
Jan 04 00:26:06 crc kubenswrapper[5108]: I0104 00:26:06.304830 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29458100-t857c"]
Jan 04 00:26:06 crc kubenswrapper[5108]: I0104 00:26:06.311718 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29458100-t857c"]
Jan 04 00:26:06 crc kubenswrapper[5108]: I0104 00:26:06.459394 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a" path="/var/lib/kubelet/pods/53d370cf-82cf-4cf1-9fb5-8bb5a4cb7b9a/volumes"
Jan 04 00:26:30 crc kubenswrapper[5108]: I0104 00:26:30.128311 5108 scope.go:117] "RemoveContainer" containerID="2ff78851b11fc0a028c6db8544eab5c51ff187424527b341b724a10a42d50636"
Jan 04 00:27:31 crc kubenswrapper[5108]: I0104 00:27:31.065695 5108 generic.go:358] "Generic (PLEG): container finished" podID="7fcecf95-bd73-4870-93fe-683ba5d5b655" containerID="23817013ee042466ce38e131cee8a9e191f37a19d9e225f3ac00285a85369431" exitCode=0
Jan 04 00:27:31 crc kubenswrapper[5108]: I0104 00:27:31.065775 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"7fcecf95-bd73-4870-93fe-683ba5d5b655","Type":"ContainerDied","Data":"23817013ee042466ce38e131cee8a9e191f37a19d9e225f3ac00285a85369431"}
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.464760 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.574627 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-buildworkdir\") pod \"7fcecf95-bd73-4870-93fe-683ba5d5b655\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") "
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.574672 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7fcecf95-bd73-4870-93fe-683ba5d5b655-buildcachedir\") pod \"7fcecf95-bd73-4870-93fe-683ba5d5b655\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") "
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.574700 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-proxy-ca-bundles\") pod \"7fcecf95-bd73-4870-93fe-683ba5d5b655\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") "
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.574737 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-blob-cache\") pod \"7fcecf95-bd73-4870-93fe-683ba5d5b655\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") "
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.574807 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/7fcecf95-bd73-4870-93fe-683ba5d5b655-builder-dockercfg-xhpgk-push\") pod \"7fcecf95-bd73-4870-93fe-683ba5d5b655\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") "
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.574863 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/7fcecf95-bd73-4870-93fe-683ba5d5b655-builder-dockercfg-xhpgk-pull\") pod \"7fcecf95-bd73-4870-93fe-683ba5d5b655\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") "
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.574904 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-ca-bundles\") pod \"7fcecf95-bd73-4870-93fe-683ba5d5b655\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") "
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.574904 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fcecf95-bd73-4870-93fe-683ba5d5b655-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "7fcecf95-bd73-4870-93fe-683ba5d5b655" (UID: "7fcecf95-bd73-4870-93fe-683ba5d5b655"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.574932 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-container-storage-run\") pod \"7fcecf95-bd73-4870-93fe-683ba5d5b655\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") "
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.575077 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-container-storage-root\") pod \"7fcecf95-bd73-4870-93fe-683ba5d5b655\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") "
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.575288 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7fcecf95-bd73-4870-93fe-683ba5d5b655-node-pullsecrets\") pod \"7fcecf95-bd73-4870-93fe-683ba5d5b655\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") "
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.575408 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-system-configs\") pod \"7fcecf95-bd73-4870-93fe-683ba5d5b655\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") "
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.575510 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz292\" (UniqueName: \"kubernetes.io/projected/7fcecf95-bd73-4870-93fe-683ba5d5b655-kube-api-access-jz292\") pod \"7fcecf95-bd73-4870-93fe-683ba5d5b655\" (UID: \"7fcecf95-bd73-4870-93fe-683ba5d5b655\") "
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.575545 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fcecf95-bd73-4870-93fe-683ba5d5b655-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "7fcecf95-bd73-4870-93fe-683ba5d5b655" (UID: "7fcecf95-bd73-4870-93fe-683ba5d5b655"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.576448 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "7fcecf95-bd73-4870-93fe-683ba5d5b655" (UID: "7fcecf95-bd73-4870-93fe-683ba5d5b655"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.576673 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "7fcecf95-bd73-4870-93fe-683ba5d5b655" (UID: "7fcecf95-bd73-4870-93fe-683ba5d5b655"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.577483 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "7fcecf95-bd73-4870-93fe-683ba5d5b655" (UID: "7fcecf95-bd73-4870-93fe-683ba5d5b655"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.577722 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.577756 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.577773 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7fcecf95-bd73-4870-93fe-683ba5d5b655-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.577786 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.577797 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7fcecf95-bd73-4870-93fe-683ba5d5b655-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.582180 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "7fcecf95-bd73-4870-93fe-683ba5d5b655" (UID: "7fcecf95-bd73-4870-93fe-683ba5d5b655"). InnerVolumeSpecName "build-proxy-ca-bundles".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.582274 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcecf95-bd73-4870-93fe-683ba5d5b655-kube-api-access-jz292" (OuterVolumeSpecName: "kube-api-access-jz292") pod "7fcecf95-bd73-4870-93fe-683ba5d5b655" (UID: "7fcecf95-bd73-4870-93fe-683ba5d5b655"). InnerVolumeSpecName "kube-api-access-jz292". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.582765 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcecf95-bd73-4870-93fe-683ba5d5b655-builder-dockercfg-xhpgk-push" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-push") pod "7fcecf95-bd73-4870-93fe-683ba5d5b655" (UID: "7fcecf95-bd73-4870-93fe-683ba5d5b655"). InnerVolumeSpecName "builder-dockercfg-xhpgk-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.582810 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcecf95-bd73-4870-93fe-683ba5d5b655-builder-dockercfg-xhpgk-pull" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-pull") pod "7fcecf95-bd73-4870-93fe-683ba5d5b655" (UID: "7fcecf95-bd73-4870-93fe-683ba5d5b655"). InnerVolumeSpecName "builder-dockercfg-xhpgk-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.619556 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "7fcecf95-bd73-4870-93fe-683ba5d5b655" (UID: "7fcecf95-bd73-4870-93fe-683ba5d5b655"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.679588 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.679634 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.679646 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/7fcecf95-bd73-4870-93fe-683ba5d5b655-builder-dockercfg-xhpgk-push\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.679655 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/7fcecf95-bd73-4870-93fe-683ba5d5b655-builder-dockercfg-xhpgk-pull\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.679665 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jz292\" (UniqueName: \"kubernetes.io/projected/7fcecf95-bd73-4870-93fe-683ba5d5b655-kube-api-access-jz292\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.769448 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "7fcecf95-bd73-4870-93fe-683ba5d5b655" (UID: "7fcecf95-bd73-4870-93fe-683ba5d5b655"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:27:32 crc kubenswrapper[5108]: I0104 00:27:32.781708 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:33 crc kubenswrapper[5108]: I0104 00:27:33.085332 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"7fcecf95-bd73-4870-93fe-683ba5d5b655","Type":"ContainerDied","Data":"49055881efe740584a678718e7b70d85f2982f772081b0aeef951aba778e922d"} Jan 04 00:27:33 crc kubenswrapper[5108]: I0104 00:27:33.085419 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49055881efe740584a678718e7b70d85f2982f772081b0aeef951aba778e922d" Jan 04 00:27:33 crc kubenswrapper[5108]: I0104 00:27:33.085456 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 04 00:27:34 crc kubenswrapper[5108]: I0104 00:27:34.612930 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "7fcecf95-bd73-4870-93fe-683ba5d5b655" (UID: "7fcecf95-bd73-4870-93fe-683ba5d5b655"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:27:34 crc kubenswrapper[5108]: I0104 00:27:34.712686 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7fcecf95-bd73-4870-93fe-683ba5d5b655-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.519030 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.519729 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7fcecf95-bd73-4870-93fe-683ba5d5b655" containerName="git-clone" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.519742 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fcecf95-bd73-4870-93fe-683ba5d5b655" containerName="git-clone" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.519753 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7fcecf95-bd73-4870-93fe-683ba5d5b655" containerName="manage-dockerfile" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.519758 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fcecf95-bd73-4870-93fe-683ba5d5b655" containerName="manage-dockerfile" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.519775 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7fcecf95-bd73-4870-93fe-683ba5d5b655" containerName="docker-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.519782 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fcecf95-bd73-4870-93fe-683ba5d5b655" containerName="docker-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.519788 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dea28c0a-3424-4943-adfa-182583f45b2b" containerName="oc" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.519795 
5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea28c0a-3424-4943-adfa-182583f45b2b" containerName="oc" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.519892 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="7fcecf95-bd73-4870-93fe-683ba5d5b655" containerName="docker-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.519903 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="dea28c0a-3424-4943-adfa-182583f45b2b" containerName="oc" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.789528 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.789732 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.792724 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-global-ca\"" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.792816 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-sys-config\"" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.792999 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-xhpgk\"" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.793946 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-ca\"" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.872047 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/66b12279-09e5-4f12-8373-9d9af29cb6ab-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: 
\"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.872105 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzff8\" (UniqueName: \"kubernetes.io/projected/66b12279-09e5-4f12-8373-9d9af29cb6ab-kube-api-access-pzff8\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.872140 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.872364 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/66b12279-09e5-4f12-8373-9d9af29cb6ab-builder-dockercfg-xhpgk-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.872496 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.872528 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.872590 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/66b12279-09e5-4f12-8373-9d9af29cb6ab-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.872641 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.872681 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/66b12279-09e5-4f12-8373-9d9af29cb6ab-builder-dockercfg-xhpgk-push\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.872724 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" 
Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.872761 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.872833 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.974558 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/66b12279-09e5-4f12-8373-9d9af29cb6ab-builder-dockercfg-xhpgk-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.974632 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.974805 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-proxy-ca-bundles\") pod 
\"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.974870 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/66b12279-09e5-4f12-8373-9d9af29cb6ab-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.974907 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.974993 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/66b12279-09e5-4f12-8373-9d9af29cb6ab-builder-dockercfg-xhpgk-push\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.975047 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.975074 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.975086 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.975092 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/66b12279-09e5-4f12-8373-9d9af29cb6ab-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.975171 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.975240 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/66b12279-09e5-4f12-8373-9d9af29cb6ab-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.975257 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pzff8\" (UniqueName: \"kubernetes.io/projected/66b12279-09e5-4f12-8373-9d9af29cb6ab-kube-api-access-pzff8\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.975284 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.975330 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.975929 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/66b12279-09e5-4f12-8373-9d9af29cb6ab-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.975936 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.975979 5108 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.976400 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.976748 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.976851 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.981970 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/66b12279-09e5-4f12-8373-9d9af29cb6ab-builder-dockercfg-xhpgk-push\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.984373 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/66b12279-09e5-4f12-8373-9d9af29cb6ab-builder-dockercfg-xhpgk-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:37 crc kubenswrapper[5108]: I0104 00:27:37.996912 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzff8\" (UniqueName: \"kubernetes.io/projected/66b12279-09e5-4f12-8373-9d9af29cb6ab-kube-api-access-pzff8\") pod \"smart-gateway-operator-1-build\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:38 crc kubenswrapper[5108]: I0104 00:27:38.109684 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:38 crc kubenswrapper[5108]: I0104 00:27:38.334526 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 04 00:27:38 crc kubenswrapper[5108]: I0104 00:27:38.341274 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 04 00:27:39 crc kubenswrapper[5108]: I0104 00:27:39.146082 5108 generic.go:358] "Generic (PLEG): container finished" podID="66b12279-09e5-4f12-8373-9d9af29cb6ab" containerID="3dc2c61872631971813899dcfd80ac583763ab2d3100559e16e5702a0fc4279b" exitCode=0 Jan 04 00:27:39 crc kubenswrapper[5108]: I0104 00:27:39.146240 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"66b12279-09e5-4f12-8373-9d9af29cb6ab","Type":"ContainerDied","Data":"3dc2c61872631971813899dcfd80ac583763ab2d3100559e16e5702a0fc4279b"} Jan 04 00:27:39 crc kubenswrapper[5108]: I0104 00:27:39.149070 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"66b12279-09e5-4f12-8373-9d9af29cb6ab","Type":"ContainerStarted","Data":"9183008a056d44005d9d028b0d02fbc61ac029ef346438b179413f3523e090c6"} Jan 04 00:27:40 crc kubenswrapper[5108]: I0104 00:27:40.160572 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"66b12279-09e5-4f12-8373-9d9af29cb6ab","Type":"ContainerStarted","Data":"b0e0ead781573b1f9789549dd9072bc1d8f9138f7ec159ab43fb82fd468d0542"} Jan 04 00:27:40 crc kubenswrapper[5108]: I0104 00:27:40.195821 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=3.195802973 podStartE2EDuration="3.195802973s" podCreationTimestamp="2026-01-04 00:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:27:40.192006279 +0000 UTC m=+1034.180571465" watchObservedRunningTime="2026-01-04 00:27:40.195802973 +0000 UTC m=+1034.184368059" Jan 04 00:27:48 crc kubenswrapper[5108]: I0104 00:27:48.265737 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 04 00:27:48 crc kubenswrapper[5108]: I0104 00:27:48.266698 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="66b12279-09e5-4f12-8373-9d9af29cb6ab" containerName="docker-build" containerID="cri-o://b0e0ead781573b1f9789549dd9072bc1d8f9138f7ec159ab43fb82fd468d0542" gracePeriod=30 Jan 04 00:27:49 crc kubenswrapper[5108]: I0104 00:27:49.940388 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.182527 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.182660 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.188077 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-ca\"" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.189590 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-sys-config\"" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.189988 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-global-ca\"" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.296407 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.297050 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.297151 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/921cb481-bab6-43e8-b32d-b394c75dd47a-buildcachedir\") pod 
\"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.297845 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.298326 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/921cb481-bab6-43e8-b32d-b394c75dd47a-builder-dockercfg-xhpgk-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.298441 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/921cb481-bab6-43e8-b32d-b394c75dd47a-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.298529 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj98j\" (UniqueName: \"kubernetes.io/projected/921cb481-bab6-43e8-b32d-b394c75dd47a-kube-api-access-nj98j\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.298658 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.298765 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.298877 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/921cb481-bab6-43e8-b32d-b394c75dd47a-builder-dockercfg-xhpgk-push\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.298975 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.299105 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: 
\"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.402795 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/921cb481-bab6-43e8-b32d-b394c75dd47a-builder-dockercfg-xhpgk-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.402860 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/921cb481-bab6-43e8-b32d-b394c75dd47a-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.402892 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nj98j\" (UniqueName: \"kubernetes.io/projected/921cb481-bab6-43e8-b32d-b394c75dd47a-kube-api-access-nj98j\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.402933 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.402965 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.402993 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/921cb481-bab6-43e8-b32d-b394c75dd47a-builder-dockercfg-xhpgk-push\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.403017 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.403051 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.403081 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.403124 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.403177 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/921cb481-bab6-43e8-b32d-b394c75dd47a-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.403239 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.404245 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/921cb481-bab6-43e8-b32d-b394c75dd47a-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.404876 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/921cb481-bab6-43e8-b32d-b394c75dd47a-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.405263 5108 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.405292 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.405424 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.405514 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.405658 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.405722 5108 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.406401 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.416489 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/921cb481-bab6-43e8-b32d-b394c75dd47a-builder-dockercfg-xhpgk-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.416528 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/921cb481-bab6-43e8-b32d-b394c75dd47a-builder-dockercfg-xhpgk-push\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.423400 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj98j\" (UniqueName: \"kubernetes.io/projected/921cb481-bab6-43e8-b32d-b394c75dd47a-kube-api-access-nj98j\") pod \"smart-gateway-operator-2-build\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 
crc kubenswrapper[5108]: I0104 00:27:52.497678 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_66b12279-09e5-4f12-8373-9d9af29cb6ab/docker-build/0.log" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.498926 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.525281 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.606483 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-proxy-ca-bundles\") pod \"66b12279-09e5-4f12-8373-9d9af29cb6ab\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.606662 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-blob-cache\") pod \"66b12279-09e5-4f12-8373-9d9af29cb6ab\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.606745 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-container-storage-run\") pod \"66b12279-09e5-4f12-8373-9d9af29cb6ab\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.606802 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/66b12279-09e5-4f12-8373-9d9af29cb6ab-buildcachedir\") pod 
\"66b12279-09e5-4f12-8373-9d9af29cb6ab\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.606835 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzff8\" (UniqueName: \"kubernetes.io/projected/66b12279-09e5-4f12-8373-9d9af29cb6ab-kube-api-access-pzff8\") pod \"66b12279-09e5-4f12-8373-9d9af29cb6ab\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.606866 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-ca-bundles\") pod \"66b12279-09e5-4f12-8373-9d9af29cb6ab\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.606892 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/66b12279-09e5-4f12-8373-9d9af29cb6ab-builder-dockercfg-xhpgk-push\") pod \"66b12279-09e5-4f12-8373-9d9af29cb6ab\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.606920 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/66b12279-09e5-4f12-8373-9d9af29cb6ab-builder-dockercfg-xhpgk-pull\") pod \"66b12279-09e5-4f12-8373-9d9af29cb6ab\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.606946 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-system-configs\") pod \"66b12279-09e5-4f12-8373-9d9af29cb6ab\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 
00:27:52.606979 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-container-storage-root\") pod \"66b12279-09e5-4f12-8373-9d9af29cb6ab\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.606999 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66b12279-09e5-4f12-8373-9d9af29cb6ab-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "66b12279-09e5-4f12-8373-9d9af29cb6ab" (UID: "66b12279-09e5-4f12-8373-9d9af29cb6ab"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.607775 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "66b12279-09e5-4f12-8373-9d9af29cb6ab" (UID: "66b12279-09e5-4f12-8373-9d9af29cb6ab"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.608058 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-buildworkdir\") pod \"66b12279-09e5-4f12-8373-9d9af29cb6ab\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.608242 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66b12279-09e5-4f12-8373-9d9af29cb6ab-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "66b12279-09e5-4f12-8373-9d9af29cb6ab" (UID: "66b12279-09e5-4f12-8373-9d9af29cb6ab"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.608246 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "66b12279-09e5-4f12-8373-9d9af29cb6ab" (UID: "66b12279-09e5-4f12-8373-9d9af29cb6ab"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.608277 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "66b12279-09e5-4f12-8373-9d9af29cb6ab" (UID: "66b12279-09e5-4f12-8373-9d9af29cb6ab"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.608642 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "66b12279-09e5-4f12-8373-9d9af29cb6ab" (UID: "66b12279-09e5-4f12-8373-9d9af29cb6ab"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.609679 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "66b12279-09e5-4f12-8373-9d9af29cb6ab" (UID: "66b12279-09e5-4f12-8373-9d9af29cb6ab"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.609759 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/66b12279-09e5-4f12-8373-9d9af29cb6ab-node-pullsecrets\") pod \"66b12279-09e5-4f12-8373-9d9af29cb6ab\" (UID: \"66b12279-09e5-4f12-8373-9d9af29cb6ab\") " Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.610470 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.610501 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/66b12279-09e5-4f12-8373-9d9af29cb6ab-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.610516 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.610532 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/66b12279-09e5-4f12-8373-9d9af29cb6ab-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.610546 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.610558 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.610572 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.612291 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "66b12279-09e5-4f12-8373-9d9af29cb6ab" (UID: "66b12279-09e5-4f12-8373-9d9af29cb6ab"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.612293 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66b12279-09e5-4f12-8373-9d9af29cb6ab-builder-dockercfg-xhpgk-pull" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-pull") pod "66b12279-09e5-4f12-8373-9d9af29cb6ab" (UID: "66b12279-09e5-4f12-8373-9d9af29cb6ab"). InnerVolumeSpecName "builder-dockercfg-xhpgk-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.612319 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b12279-09e5-4f12-8373-9d9af29cb6ab-kube-api-access-pzff8" (OuterVolumeSpecName: "kube-api-access-pzff8") pod "66b12279-09e5-4f12-8373-9d9af29cb6ab" (UID: "66b12279-09e5-4f12-8373-9d9af29cb6ab"). InnerVolumeSpecName "kube-api-access-pzff8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.612602 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66b12279-09e5-4f12-8373-9d9af29cb6ab-builder-dockercfg-xhpgk-push" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-push") pod "66b12279-09e5-4f12-8373-9d9af29cb6ab" (UID: "66b12279-09e5-4f12-8373-9d9af29cb6ab"). InnerVolumeSpecName "builder-dockercfg-xhpgk-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.712168 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pzff8\" (UniqueName: \"kubernetes.io/projected/66b12279-09e5-4f12-8373-9d9af29cb6ab-kube-api-access-pzff8\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.712261 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/66b12279-09e5-4f12-8373-9d9af29cb6ab-builder-dockercfg-xhpgk-push\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.712280 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/66b12279-09e5-4f12-8373-9d9af29cb6ab-builder-dockercfg-xhpgk-pull\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.712296 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.773474 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.773857 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_66b12279-09e5-4f12-8373-9d9af29cb6ab/docker-build/0.log" Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.775642 5108 generic.go:358] "Generic (PLEG): container finished" podID="66b12279-09e5-4f12-8373-9d9af29cb6ab" containerID="b0e0ead781573b1f9789549dd9072bc1d8f9138f7ec159ab43fb82fd468d0542" exitCode=1 Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.775749 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"66b12279-09e5-4f12-8373-9d9af29cb6ab","Type":"ContainerDied","Data":"b0e0ead781573b1f9789549dd9072bc1d8f9138f7ec159ab43fb82fd468d0542"} Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.775797 5108 scope.go:117] "RemoveContainer" containerID="b0e0ead781573b1f9789549dd9072bc1d8f9138f7ec159ab43fb82fd468d0542" Jan 04 00:27:52 crc kubenswrapper[5108]: W0104 00:27:52.784741 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod921cb481_bab6_43e8_b32d_b394c75dd47a.slice/crio-a90438cc0f47ff85edb8c397d8c81651196af4a6b2fb7895fbf68a7e53a94934 WatchSource:0}: Error finding container a90438cc0f47ff85edb8c397d8c81651196af4a6b2fb7895fbf68a7e53a94934: Status 404 returned error can't find the container with id a90438cc0f47ff85edb8c397d8c81651196af4a6b2fb7895fbf68a7e53a94934 Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.821154 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "66b12279-09e5-4f12-8373-9d9af29cb6ab" (UID: "66b12279-09e5-4f12-8373-9d9af29cb6ab"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.857910 5108 scope.go:117] "RemoveContainer" containerID="3dc2c61872631971813899dcfd80ac583763ab2d3100559e16e5702a0fc4279b"
Jan 04 00:27:52 crc kubenswrapper[5108]: I0104 00:27:52.915242 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/66b12279-09e5-4f12-8373-9d9af29cb6ab-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 04 00:27:53 crc kubenswrapper[5108]: I0104 00:27:53.786356 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"921cb481-bab6-43e8-b32d-b394c75dd47a","Type":"ContainerStarted","Data":"cca9f7a5fcf0a69931ff14e3dd40f87e1b95b275d2a225c44b6fea2fd41a7a8d"}
Jan 04 00:27:53 crc kubenswrapper[5108]: I0104 00:27:53.787006 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"921cb481-bab6-43e8-b32d-b394c75dd47a","Type":"ContainerStarted","Data":"a90438cc0f47ff85edb8c397d8c81651196af4a6b2fb7895fbf68a7e53a94934"}
Jan 04 00:27:53 crc kubenswrapper[5108]: I0104 00:27:53.789385 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"66b12279-09e5-4f12-8373-9d9af29cb6ab","Type":"ContainerDied","Data":"9183008a056d44005d9d028b0d02fbc61ac029ef346438b179413f3523e090c6"}
Jan 04 00:27:53 crc kubenswrapper[5108]: I0104 00:27:53.789475 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Jan 04 00:27:53 crc kubenswrapper[5108]: I0104 00:27:53.838937 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 04 00:27:53 crc kubenswrapper[5108]: I0104 00:27:53.846083 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 04 00:27:54 crc kubenswrapper[5108]: I0104 00:27:54.459616 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66b12279-09e5-4f12-8373-9d9af29cb6ab" path="/var/lib/kubelet/pods/66b12279-09e5-4f12-8373-9d9af29cb6ab/volumes"
Jan 04 00:27:54 crc kubenswrapper[5108]: I0104 00:27:54.798608 5108 generic.go:358] "Generic (PLEG): container finished" podID="921cb481-bab6-43e8-b32d-b394c75dd47a" containerID="cca9f7a5fcf0a69931ff14e3dd40f87e1b95b275d2a225c44b6fea2fd41a7a8d" exitCode=0
Jan 04 00:27:54 crc kubenswrapper[5108]: I0104 00:27:54.800504 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"921cb481-bab6-43e8-b32d-b394c75dd47a","Type":"ContainerDied","Data":"cca9f7a5fcf0a69931ff14e3dd40f87e1b95b275d2a225c44b6fea2fd41a7a8d"}
Jan 04 00:27:55 crc kubenswrapper[5108]: I0104 00:27:55.807930 5108 generic.go:358] "Generic (PLEG): container finished" podID="921cb481-bab6-43e8-b32d-b394c75dd47a" containerID="a12c7a5ee948ac1bc0b108d688f58a10e5a3fd280345ff281317278681f42adb" exitCode=0
Jan 04 00:27:55 crc kubenswrapper[5108]: I0104 00:27:55.808038 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"921cb481-bab6-43e8-b32d-b394c75dd47a","Type":"ContainerDied","Data":"a12c7a5ee948ac1bc0b108d688f58a10e5a3fd280345ff281317278681f42adb"}
Jan 04 00:27:55 crc kubenswrapper[5108]: I0104 00:27:55.863180 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_921cb481-bab6-43e8-b32d-b394c75dd47a/manage-dockerfile/0.log"
Jan 04 00:27:56 crc kubenswrapper[5108]: I0104 00:27:56.819675 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"921cb481-bab6-43e8-b32d-b394c75dd47a","Type":"ContainerStarted","Data":"125d56fcc5ca4c44e1d1b111ddac6155dcd791acb1e6866db09e5dc02e18ae49"}
Jan 04 00:27:56 crc kubenswrapper[5108]: I0104 00:27:56.854663 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=7.854637614 podStartE2EDuration="7.854637614s" podCreationTimestamp="2026-01-04 00:27:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:27:56.848130086 +0000 UTC m=+1050.836695192" watchObservedRunningTime="2026-01-04 00:27:56.854637614 +0000 UTC m=+1050.843202700"
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.140967 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29458108-r6gfw"]
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.142500 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="66b12279-09e5-4f12-8373-9d9af29cb6ab" containerName="manage-dockerfile"
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.142522 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b12279-09e5-4f12-8373-9d9af29cb6ab" containerName="manage-dockerfile"
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.142540 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="66b12279-09e5-4f12-8373-9d9af29cb6ab" containerName="docker-build"
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.142547 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b12279-09e5-4f12-8373-9d9af29cb6ab" containerName="docker-build"
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.142688 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="66b12279-09e5-4f12-8373-9d9af29cb6ab" containerName="docker-build"
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.147493 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458108-r6gfw"
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.150233 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-s7k94\""
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.151435 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.152049 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.166974 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458108-r6gfw"]
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.246613 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdxll\" (UniqueName: \"kubernetes.io/projected/068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1-kube-api-access-qdxll\") pod \"auto-csr-approver-29458108-r6gfw\" (UID: \"068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1\") " pod="openshift-infra/auto-csr-approver-29458108-r6gfw"
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.348639 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qdxll\" (UniqueName: \"kubernetes.io/projected/068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1-kube-api-access-qdxll\") pod \"auto-csr-approver-29458108-r6gfw\" (UID: \"068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1\") " pod="openshift-infra/auto-csr-approver-29458108-r6gfw"
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.369497 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdxll\" (UniqueName: \"kubernetes.io/projected/068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1-kube-api-access-qdxll\") pod \"auto-csr-approver-29458108-r6gfw\" (UID: \"068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1\") " pod="openshift-infra/auto-csr-approver-29458108-r6gfw"
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.467504 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458108-r6gfw"
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.690227 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458108-r6gfw"]
Jan 04 00:28:00 crc kubenswrapper[5108]: W0104 00:28:00.690745 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod068d5f06_a4c5_46a7_ac2a_7ea19fce3ed1.slice/crio-57f0dccbb681b404cf4eb7e081b1e741b9f2c0ab8772b2b1c57a5a5b64b24304 WatchSource:0}: Error finding container 57f0dccbb681b404cf4eb7e081b1e741b9f2c0ab8772b2b1c57a5a5b64b24304: Status 404 returned error can't find the container with id 57f0dccbb681b404cf4eb7e081b1e741b9f2c0ab8772b2b1c57a5a5b64b24304
Jan 04 00:28:00 crc kubenswrapper[5108]: I0104 00:28:00.855001 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458108-r6gfw" event={"ID":"068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1","Type":"ContainerStarted","Data":"57f0dccbb681b404cf4eb7e081b1e741b9f2c0ab8772b2b1c57a5a5b64b24304"}
Jan 04 00:28:08 crc kubenswrapper[5108]: I0104 00:28:08.080529 5108 generic.go:358] "Generic (PLEG): container finished" podID="068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1" containerID="ffbf071e4332cfa038f9bb3c89ba0e184332d2b0cb660826aaa9e1ffb2807727" exitCode=0
Jan 04 00:28:08 crc kubenswrapper[5108]: I0104 00:28:08.081555 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458108-r6gfw" event={"ID":"068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1","Type":"ContainerDied","Data":"ffbf071e4332cfa038f9bb3c89ba0e184332d2b0cb660826aaa9e1ffb2807727"}
Jan 04 00:28:09 crc kubenswrapper[5108]: I0104 00:28:09.365011 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458108-r6gfw"
Jan 04 00:28:09 crc kubenswrapper[5108]: I0104 00:28:09.429313 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdxll\" (UniqueName: \"kubernetes.io/projected/068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1-kube-api-access-qdxll\") pod \"068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1\" (UID: \"068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1\") "
Jan 04 00:28:09 crc kubenswrapper[5108]: I0104 00:28:09.439322 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1-kube-api-access-qdxll" (OuterVolumeSpecName: "kube-api-access-qdxll") pod "068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1" (UID: "068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1"). InnerVolumeSpecName "kube-api-access-qdxll". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:28:09 crc kubenswrapper[5108]: I0104 00:28:09.531125 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qdxll\" (UniqueName: \"kubernetes.io/projected/068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1-kube-api-access-qdxll\") on node \"crc\" DevicePath \"\""
Jan 04 00:28:10 crc kubenswrapper[5108]: I0104 00:28:10.098272 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458108-r6gfw"
Jan 04 00:28:10 crc kubenswrapper[5108]: I0104 00:28:10.098343 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458108-r6gfw" event={"ID":"068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1","Type":"ContainerDied","Data":"57f0dccbb681b404cf4eb7e081b1e741b9f2c0ab8772b2b1c57a5a5b64b24304"}
Jan 04 00:28:10 crc kubenswrapper[5108]: I0104 00:28:10.098769 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57f0dccbb681b404cf4eb7e081b1e741b9f2c0ab8772b2b1c57a5a5b64b24304"
Jan 04 00:28:10 crc kubenswrapper[5108]: I0104 00:28:10.445470 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29458102-msftx"]
Jan 04 00:28:10 crc kubenswrapper[5108]: I0104 00:28:10.462885 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29458102-msftx"]
Jan 04 00:28:12 crc kubenswrapper[5108]: I0104 00:28:12.456116 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d23c37b5-6c23-48f9-960a-a9c174d8430c" path="/var/lib/kubelet/pods/d23c37b5-6c23-48f9-960a-a9c174d8430c/volumes"
Jan 04 00:28:24 crc kubenswrapper[5108]: I0104 00:28:24.917308 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 04 00:28:24 crc kubenswrapper[5108]: I0104 00:28:24.920308 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 04 00:28:30 crc kubenswrapper[5108]: I0104 00:28:30.282093 5108 scope.go:117] "RemoveContainer" containerID="18abcf584a10658b74f08503746f145aa65528f4db2db21b58910df46c712b62"
Jan 04 00:28:54 crc kubenswrapper[5108]: I0104 00:28:54.917260 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 04 00:28:54 crc kubenswrapper[5108]: I0104 00:28:54.918167 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 04 00:29:24 crc kubenswrapper[5108]: I0104 00:29:24.917440 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 04 00:29:24 crc kubenswrapper[5108]: I0104 00:29:24.918415 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 04 00:29:24 crc kubenswrapper[5108]: I0104 00:29:24.918484 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-njl5v"
Jan 04 00:29:24 crc kubenswrapper[5108]: I0104 00:29:24.919189 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d315c271b5ebb5ccd4137805a4c0a0f8051b40ee81c1c5c36d5b609914f2eb07"} pod="openshift-machine-config-operator/machine-config-daemon-njl5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 04 00:29:24 crc kubenswrapper[5108]: I0104 00:29:24.919279 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" containerID="cri-o://d315c271b5ebb5ccd4137805a4c0a0f8051b40ee81c1c5c36d5b609914f2eb07" gracePeriod=600
Jan 04 00:29:26 crc kubenswrapper[5108]: I0104 00:29:26.097083 5108 generic.go:358] "Generic (PLEG): container finished" podID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerID="d315c271b5ebb5ccd4137805a4c0a0f8051b40ee81c1c5c36d5b609914f2eb07" exitCode=0
Jan 04 00:29:26 crc kubenswrapper[5108]: I0104 00:29:26.097136 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerDied","Data":"d315c271b5ebb5ccd4137805a4c0a0f8051b40ee81c1c5c36d5b609914f2eb07"}
Jan 04 00:29:26 crc kubenswrapper[5108]: I0104 00:29:26.097755 5108 scope.go:117] "RemoveContainer" containerID="c8dc27842f4ece5439b06d6ce112671ad3f7bc8894f51d9a8d835c365dc97f45"
Jan 04 00:29:27 crc kubenswrapper[5108]: I0104 00:29:27.110241 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerStarted","Data":"bad0ea277fd94911974fbd9c4fb75c82a3196517d30a4e258eccd8f7cc79a379"}
Jan 04 00:29:28 crc kubenswrapper[5108]: I0104 00:29:28.142422 5108 generic.go:358] "Generic (PLEG): container finished" podID="921cb481-bab6-43e8-b32d-b394c75dd47a" containerID="125d56fcc5ca4c44e1d1b111ddac6155dcd791acb1e6866db09e5dc02e18ae49" exitCode=0
Jan 04 00:29:28 crc kubenswrapper[5108]: I0104 00:29:28.142535 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"921cb481-bab6-43e8-b32d-b394c75dd47a","Type":"ContainerDied","Data":"125d56fcc5ca4c44e1d1b111ddac6155dcd791acb1e6866db09e5dc02e18ae49"}
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.476444 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.485911 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj98j\" (UniqueName: \"kubernetes.io/projected/921cb481-bab6-43e8-b32d-b394c75dd47a-kube-api-access-nj98j\") pod \"921cb481-bab6-43e8-b32d-b394c75dd47a\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") "
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.486070 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-container-storage-run\") pod \"921cb481-bab6-43e8-b32d-b394c75dd47a\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") "
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.486107 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/921cb481-bab6-43e8-b32d-b394c75dd47a-node-pullsecrets\") pod \"921cb481-bab6-43e8-b32d-b394c75dd47a\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") "
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.486159 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-build-blob-cache\") pod \"921cb481-bab6-43e8-b32d-b394c75dd47a\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") "
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.486272 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-proxy-ca-bundles\") pod \"921cb481-bab6-43e8-b32d-b394c75dd47a\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") "
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.486269 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/921cb481-bab6-43e8-b32d-b394c75dd47a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "921cb481-bab6-43e8-b32d-b394c75dd47a" (UID: "921cb481-bab6-43e8-b32d-b394c75dd47a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.486299 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-buildworkdir\") pod \"921cb481-bab6-43e8-b32d-b394c75dd47a\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") "
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.486362 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/921cb481-bab6-43e8-b32d-b394c75dd47a-buildcachedir\") pod \"921cb481-bab6-43e8-b32d-b394c75dd47a\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") "
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.486440 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-container-storage-root\") pod \"921cb481-bab6-43e8-b32d-b394c75dd47a\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") "
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.486527 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/921cb481-bab6-43e8-b32d-b394c75dd47a-builder-dockercfg-xhpgk-pull\") pod \"921cb481-bab6-43e8-b32d-b394c75dd47a\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") "
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.486571 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/921cb481-bab6-43e8-b32d-b394c75dd47a-builder-dockercfg-xhpgk-push\") pod \"921cb481-bab6-43e8-b32d-b394c75dd47a\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") "
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.486645 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-system-configs\") pod \"921cb481-bab6-43e8-b32d-b394c75dd47a\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") "
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.486739 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-ca-bundles\") pod \"921cb481-bab6-43e8-b32d-b394c75dd47a\" (UID: \"921cb481-bab6-43e8-b32d-b394c75dd47a\") "
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.487167 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/921cb481-bab6-43e8-b32d-b394c75dd47a-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.488413 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "921cb481-bab6-43e8-b32d-b394c75dd47a" (UID: "921cb481-bab6-43e8-b32d-b394c75dd47a"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.488469 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/921cb481-bab6-43e8-b32d-b394c75dd47a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "921cb481-bab6-43e8-b32d-b394c75dd47a" (UID: "921cb481-bab6-43e8-b32d-b394c75dd47a"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.488466 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "921cb481-bab6-43e8-b32d-b394c75dd47a" (UID: "921cb481-bab6-43e8-b32d-b394c75dd47a"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.489341 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "921cb481-bab6-43e8-b32d-b394c75dd47a" (UID: "921cb481-bab6-43e8-b32d-b394c75dd47a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.491004 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "921cb481-bab6-43e8-b32d-b394c75dd47a" (UID: "921cb481-bab6-43e8-b32d-b394c75dd47a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.491557 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "921cb481-bab6-43e8-b32d-b394c75dd47a" (UID: "921cb481-bab6-43e8-b32d-b394c75dd47a"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.499973 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921cb481-bab6-43e8-b32d-b394c75dd47a-builder-dockercfg-xhpgk-push" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-push") pod "921cb481-bab6-43e8-b32d-b394c75dd47a" (UID: "921cb481-bab6-43e8-b32d-b394c75dd47a"). InnerVolumeSpecName "builder-dockercfg-xhpgk-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.500015 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921cb481-bab6-43e8-b32d-b394c75dd47a-kube-api-access-nj98j" (OuterVolumeSpecName: "kube-api-access-nj98j") pod "921cb481-bab6-43e8-b32d-b394c75dd47a" (UID: "921cb481-bab6-43e8-b32d-b394c75dd47a"). InnerVolumeSpecName "kube-api-access-nj98j". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.500468 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921cb481-bab6-43e8-b32d-b394c75dd47a-builder-dockercfg-xhpgk-pull" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-pull") pod "921cb481-bab6-43e8-b32d-b394c75dd47a" (UID: "921cb481-bab6-43e8-b32d-b394c75dd47a"). InnerVolumeSpecName "builder-dockercfg-xhpgk-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.588830 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.588905 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.588927 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/921cb481-bab6-43e8-b32d-b394c75dd47a-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.588945 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/921cb481-bab6-43e8-b32d-b394c75dd47a-builder-dockercfg-xhpgk-pull\") on node \"crc\" DevicePath \"\""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.588970 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/921cb481-bab6-43e8-b32d-b394c75dd47a-builder-dockercfg-xhpgk-push\") on node \"crc\" DevicePath \"\""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.588989 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.589006 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921cb481-bab6-43e8-b32d-b394c75dd47a-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.589024 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nj98j\" (UniqueName: \"kubernetes.io/projected/921cb481-bab6-43e8-b32d-b394c75dd47a-kube-api-access-nj98j\") on node \"crc\" DevicePath \"\""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.589040 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.705805 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "921cb481-bab6-43e8-b32d-b394c75dd47a" (UID: "921cb481-bab6-43e8-b32d-b394c75dd47a"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:29:29 crc kubenswrapper[5108]: I0104 00:29:29.793547 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 04 00:29:30 crc kubenswrapper[5108]: I0104 00:29:30.167517 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"921cb481-bab6-43e8-b32d-b394c75dd47a","Type":"ContainerDied","Data":"a90438cc0f47ff85edb8c397d8c81651196af4a6b2fb7895fbf68a7e53a94934"}
Jan 04 00:29:30 crc kubenswrapper[5108]: I0104 00:29:30.167581 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a90438cc0f47ff85edb8c397d8c81651196af4a6b2fb7895fbf68a7e53a94934"
Jan 04 00:29:30 crc kubenswrapper[5108]: I0104 00:29:30.167582 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Jan 04 00:29:30 crc kubenswrapper[5108]: I0104 00:29:30.419605 5108 scope.go:117] "RemoveContainer" containerID="ca8357eab86483cb33c5ce3e80ba8c5610eab7e73c8eb7d4910fd5000a8c8a29"
Jan 04 00:29:30 crc kubenswrapper[5108]: I0104 00:29:30.443134 5108 scope.go:117] "RemoveContainer" containerID="f5d1691770f63ef1ad58f03c2c00ffbac8b4776b50aaddb65a37ffc81b306ff5"
Jan 04 00:29:30 crc kubenswrapper[5108]: I0104 00:29:30.465018 5108 scope.go:117] "RemoveContainer" containerID="39f992438ba9c77f299c8db5b09aed6bf13183fbbe06b5e4f4e53ef87878afc4"
Jan 04 00:29:31 crc kubenswrapper[5108]: I0104 00:29:31.604179 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "921cb481-bab6-43e8-b32d-b394c75dd47a" (UID: "921cb481-bab6-43e8-b32d-b394c75dd47a"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:29:31 crc kubenswrapper[5108]: I0104 00:29:31.624216 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/921cb481-bab6-43e8-b32d-b394c75dd47a-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.104137 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"]
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.105662 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="921cb481-bab6-43e8-b32d-b394c75dd47a" containerName="manage-dockerfile"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.105691 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="921cb481-bab6-43e8-b32d-b394c75dd47a" containerName="manage-dockerfile"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.105723 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1" containerName="oc"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.105733 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1" containerName="oc"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.105757 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="921cb481-bab6-43e8-b32d-b394c75dd47a" containerName="docker-build"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.105766 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="921cb481-bab6-43e8-b32d-b394c75dd47a" containerName="docker-build"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.105776 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="921cb481-bab6-43e8-b32d-b394c75dd47a" containerName="git-clone"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.105783 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="921cb481-bab6-43e8-b32d-b394c75dd47a" containerName="git-clone"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.105913 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="921cb481-bab6-43e8-b32d-b394c75dd47a" containerName="docker-build"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.105927 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1" containerName="oc"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.545347 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"]
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.545669 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.548998 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-ca\""
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.549150 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-sys-config\""
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.549365 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-global-ca\""
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.553227 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-xhpgk\""
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.568946 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-buildworkdir\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.569393 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-system-configs\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.569493 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-container-storage-root\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.569652 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/1f744689-6cbf-4773-bb52-8912257dfcda-builder-dockercfg-xhpgk-pull\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.569879 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.570331 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1f744689-6cbf-4773-bb52-8912257dfcda-buildcachedir\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.570475 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/1f744689-6cbf-4773-bb52-8912257dfcda-builder-dockercfg-xhpgk-push\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.570578 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.570648 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.570726 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vz9x\" (UniqueName: \"kubernetes.io/projected/1f744689-6cbf-4773-bb52-8912257dfcda-kube-api-access-6vz9x\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build"
Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.570847 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-container-storage-run\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") "
pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.571019 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f744689-6cbf-4773-bb52-8912257dfcda-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.672712 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-container-storage-run\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.672790 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f744689-6cbf-4773-bb52-8912257dfcda-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.672824 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-buildworkdir\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.672849 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-system-configs\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 
00:29:34.672870 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-container-storage-root\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.672894 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/1f744689-6cbf-4773-bb52-8912257dfcda-builder-dockercfg-xhpgk-pull\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.672931 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.672956 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1f744689-6cbf-4773-bb52-8912257dfcda-buildcachedir\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.672971 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/1f744689-6cbf-4773-bb52-8912257dfcda-builder-dockercfg-xhpgk-push\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.672990 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.673008 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.673028 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6vz9x\" (UniqueName: \"kubernetes.io/projected/1f744689-6cbf-4773-bb52-8912257dfcda-kube-api-access-6vz9x\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.674193 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-buildworkdir\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.674332 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1f744689-6cbf-4773-bb52-8912257dfcda-buildcachedir\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.674593 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-container-storage-run\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.674716 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f744689-6cbf-4773-bb52-8912257dfcda-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.674817 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-container-storage-root\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.675062 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.675466 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-system-configs\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.675690 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: 
\"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.676053 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.681790 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/1f744689-6cbf-4773-bb52-8912257dfcda-builder-dockercfg-xhpgk-pull\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.684653 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/1f744689-6cbf-4773-bb52-8912257dfcda-builder-dockercfg-xhpgk-push\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.696330 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vz9x\" (UniqueName: \"kubernetes.io/projected/1f744689-6cbf-4773-bb52-8912257dfcda-kube-api-access-6vz9x\") pod \"sg-core-1-build\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " pod="service-telemetry/sg-core-1-build" Jan 04 00:29:34 crc kubenswrapper[5108]: I0104 00:29:34.862788 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 04 00:29:35 crc kubenswrapper[5108]: I0104 00:29:35.112581 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 04 00:29:35 crc kubenswrapper[5108]: I0104 00:29:35.236269 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"1f744689-6cbf-4773-bb52-8912257dfcda","Type":"ContainerStarted","Data":"9f84672bf163adbd22dc6e28d025d57887a1d6f145fec739ebc459226d99d792"} Jan 04 00:29:37 crc kubenswrapper[5108]: I0104 00:29:37.257130 5108 generic.go:358] "Generic (PLEG): container finished" podID="1f744689-6cbf-4773-bb52-8912257dfcda" containerID="658b8c1e6c09b3605f363806a47a4e8ab93731c168c1bdfffb110055ea43b2fe" exitCode=0 Jan 04 00:29:37 crc kubenswrapper[5108]: I0104 00:29:37.257246 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"1f744689-6cbf-4773-bb52-8912257dfcda","Type":"ContainerDied","Data":"658b8c1e6c09b3605f363806a47a4e8ab93731c168c1bdfffb110055ea43b2fe"} Jan 04 00:29:38 crc kubenswrapper[5108]: I0104 00:29:38.274126 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"1f744689-6cbf-4773-bb52-8912257dfcda","Type":"ContainerStarted","Data":"58d8d77c0e061e60c06e7db12ccbc9b544f1696283f6007fef4ed96835e4e365"} Jan 04 00:29:39 crc kubenswrapper[5108]: I0104 00:29:39.308105 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=5.308085046 podStartE2EDuration="5.308085046s" podCreationTimestamp="2026-01-04 00:29:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:29:39.304339814 +0000 UTC m=+1153.292904910" watchObservedRunningTime="2026-01-04 00:29:39.308085046 +0000 UTC m=+1153.296650122" Jan 04 00:29:44 crc 
kubenswrapper[5108]: I0104 00:29:44.498419 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 04 00:29:44 crc kubenswrapper[5108]: I0104 00:29:44.499541 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="1f744689-6cbf-4773-bb52-8912257dfcda" containerName="docker-build" containerID="cri-o://58d8d77c0e061e60c06e7db12ccbc9b544f1696283f6007fef4ed96835e4e365" gracePeriod=30 Jan 04 00:29:44 crc kubenswrapper[5108]: I0104 00:29:44.948610 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_1f744689-6cbf-4773-bb52-8912257dfcda/docker-build/0.log" Jan 04 00:29:44 crc kubenswrapper[5108]: I0104 00:29:44.949691 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.058676 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1f744689-6cbf-4773-bb52-8912257dfcda-buildcachedir\") pod \"1f744689-6cbf-4773-bb52-8912257dfcda\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.058807 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f744689-6cbf-4773-bb52-8912257dfcda-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "1f744689-6cbf-4773-bb52-8912257dfcda" (UID: "1f744689-6cbf-4773-bb52-8912257dfcda"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.058844 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-container-storage-run\") pod \"1f744689-6cbf-4773-bb52-8912257dfcda\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.059018 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-build-blob-cache\") pod \"1f744689-6cbf-4773-bb52-8912257dfcda\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.059057 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/1f744689-6cbf-4773-bb52-8912257dfcda-builder-dockercfg-xhpgk-pull\") pod \"1f744689-6cbf-4773-bb52-8912257dfcda\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.059095 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f744689-6cbf-4773-bb52-8912257dfcda-node-pullsecrets\") pod \"1f744689-6cbf-4773-bb52-8912257dfcda\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.059145 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/1f744689-6cbf-4773-bb52-8912257dfcda-builder-dockercfg-xhpgk-push\") pod \"1f744689-6cbf-4773-bb52-8912257dfcda\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.059183 5108 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f744689-6cbf-4773-bb52-8912257dfcda-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1f744689-6cbf-4773-bb52-8912257dfcda" (UID: "1f744689-6cbf-4773-bb52-8912257dfcda"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.059270 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-buildworkdir\") pod \"1f744689-6cbf-4773-bb52-8912257dfcda\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.059387 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-system-configs\") pod \"1f744689-6cbf-4773-bb52-8912257dfcda\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.059412 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vz9x\" (UniqueName: \"kubernetes.io/projected/1f744689-6cbf-4773-bb52-8912257dfcda-kube-api-access-6vz9x\") pod \"1f744689-6cbf-4773-bb52-8912257dfcda\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.059456 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-ca-bundles\") pod \"1f744689-6cbf-4773-bb52-8912257dfcda\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.059519 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-proxy-ca-bundles\") pod \"1f744689-6cbf-4773-bb52-8912257dfcda\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.059560 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-container-storage-root\") pod \"1f744689-6cbf-4773-bb52-8912257dfcda\" (UID: \"1f744689-6cbf-4773-bb52-8912257dfcda\") " Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.059834 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f744689-6cbf-4773-bb52-8912257dfcda-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.059858 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1f744689-6cbf-4773-bb52-8912257dfcda-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.059943 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "1f744689-6cbf-4773-bb52-8912257dfcda" (UID: "1f744689-6cbf-4773-bb52-8912257dfcda"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.060256 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "1f744689-6cbf-4773-bb52-8912257dfcda" (UID: "1f744689-6cbf-4773-bb52-8912257dfcda"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.060527 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "1f744689-6cbf-4773-bb52-8912257dfcda" (UID: "1f744689-6cbf-4773-bb52-8912257dfcda"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.060825 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "1f744689-6cbf-4773-bb52-8912257dfcda" (UID: "1f744689-6cbf-4773-bb52-8912257dfcda"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.061317 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "1f744689-6cbf-4773-bb52-8912257dfcda" (UID: "1f744689-6cbf-4773-bb52-8912257dfcda"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.066923 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f744689-6cbf-4773-bb52-8912257dfcda-kube-api-access-6vz9x" (OuterVolumeSpecName: "kube-api-access-6vz9x") pod "1f744689-6cbf-4773-bb52-8912257dfcda" (UID: "1f744689-6cbf-4773-bb52-8912257dfcda"). InnerVolumeSpecName "kube-api-access-6vz9x". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.067441 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f744689-6cbf-4773-bb52-8912257dfcda-builder-dockercfg-xhpgk-push" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-push") pod "1f744689-6cbf-4773-bb52-8912257dfcda" (UID: "1f744689-6cbf-4773-bb52-8912257dfcda"). InnerVolumeSpecName "builder-dockercfg-xhpgk-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.067473 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f744689-6cbf-4773-bb52-8912257dfcda-builder-dockercfg-xhpgk-pull" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-pull") pod "1f744689-6cbf-4773-bb52-8912257dfcda" (UID: "1f744689-6cbf-4773-bb52-8912257dfcda"). InnerVolumeSpecName "builder-dockercfg-xhpgk-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.099756 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "1f744689-6cbf-4773-bb52-8912257dfcda" (UID: "1f744689-6cbf-4773-bb52-8912257dfcda"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.161005 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.161066 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.161082 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/1f744689-6cbf-4773-bb52-8912257dfcda-builder-dockercfg-xhpgk-pull\") on node \"crc\" DevicePath \"\"" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.161097 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/1f744689-6cbf-4773-bb52-8912257dfcda-builder-dockercfg-xhpgk-push\") on node \"crc\" DevicePath \"\"" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.161112 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.161122 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.161133 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6vz9x\" (UniqueName: \"kubernetes.io/projected/1f744689-6cbf-4773-bb52-8912257dfcda-kube-api-access-6vz9x\") on node 
\"crc\" DevicePath \"\"" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.161144 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.161152 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f744689-6cbf-4773-bb52-8912257dfcda-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.187451 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "1f744689-6cbf-4773-bb52-8912257dfcda" (UID: "1f744689-6cbf-4773-bb52-8912257dfcda"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.262963 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1f744689-6cbf-4773-bb52-8912257dfcda-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.333355 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_1f744689-6cbf-4773-bb52-8912257dfcda/docker-build/0.log" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.333870 5108 generic.go:358] "Generic (PLEG): container finished" podID="1f744689-6cbf-4773-bb52-8912257dfcda" containerID="58d8d77c0e061e60c06e7db12ccbc9b544f1696283f6007fef4ed96835e4e365" exitCode=1 Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.334013 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" 
event={"ID":"1f744689-6cbf-4773-bb52-8912257dfcda","Type":"ContainerDied","Data":"58d8d77c0e061e60c06e7db12ccbc9b544f1696283f6007fef4ed96835e4e365"} Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.334072 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.334115 5108 scope.go:117] "RemoveContainer" containerID="58d8d77c0e061e60c06e7db12ccbc9b544f1696283f6007fef4ed96835e4e365" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.334098 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"1f744689-6cbf-4773-bb52-8912257dfcda","Type":"ContainerDied","Data":"9f84672bf163adbd22dc6e28d025d57887a1d6f145fec739ebc459226d99d792"} Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.360844 5108 scope.go:117] "RemoveContainer" containerID="658b8c1e6c09b3605f363806a47a4e8ab93731c168c1bdfffb110055ea43b2fe" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.380925 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.387005 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.453687 5108 scope.go:117] "RemoveContainer" containerID="58d8d77c0e061e60c06e7db12ccbc9b544f1696283f6007fef4ed96835e4e365" Jan 04 00:29:45 crc kubenswrapper[5108]: E0104 00:29:45.455304 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58d8d77c0e061e60c06e7db12ccbc9b544f1696283f6007fef4ed96835e4e365\": container with ID starting with 58d8d77c0e061e60c06e7db12ccbc9b544f1696283f6007fef4ed96835e4e365 not found: ID does not exist" containerID="58d8d77c0e061e60c06e7db12ccbc9b544f1696283f6007fef4ed96835e4e365" Jan 04 00:29:45 crc 
kubenswrapper[5108]: I0104 00:29:45.455380 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58d8d77c0e061e60c06e7db12ccbc9b544f1696283f6007fef4ed96835e4e365"} err="failed to get container status \"58d8d77c0e061e60c06e7db12ccbc9b544f1696283f6007fef4ed96835e4e365\": rpc error: code = NotFound desc = could not find container \"58d8d77c0e061e60c06e7db12ccbc9b544f1696283f6007fef4ed96835e4e365\": container with ID starting with 58d8d77c0e061e60c06e7db12ccbc9b544f1696283f6007fef4ed96835e4e365 not found: ID does not exist" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.455422 5108 scope.go:117] "RemoveContainer" containerID="658b8c1e6c09b3605f363806a47a4e8ab93731c168c1bdfffb110055ea43b2fe" Jan 04 00:29:45 crc kubenswrapper[5108]: E0104 00:29:45.455904 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"658b8c1e6c09b3605f363806a47a4e8ab93731c168c1bdfffb110055ea43b2fe\": container with ID starting with 658b8c1e6c09b3605f363806a47a4e8ab93731c168c1bdfffb110055ea43b2fe not found: ID does not exist" containerID="658b8c1e6c09b3605f363806a47a4e8ab93731c168c1bdfffb110055ea43b2fe" Jan 04 00:29:45 crc kubenswrapper[5108]: I0104 00:29:45.455959 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"658b8c1e6c09b3605f363806a47a4e8ab93731c168c1bdfffb110055ea43b2fe"} err="failed to get container status \"658b8c1e6c09b3605f363806a47a4e8ab93731c168c1bdfffb110055ea43b2fe\": rpc error: code = NotFound desc = could not find container \"658b8c1e6c09b3605f363806a47a4e8ab93731c168c1bdfffb110055ea43b2fe\": container with ID starting with 658b8c1e6c09b3605f363806a47a4e8ab93731c168c1bdfffb110055ea43b2fe not found: ID does not exist" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.118263 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 04 00:29:46 crc 
kubenswrapper[5108]: I0104 00:29:46.120077 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f744689-6cbf-4773-bb52-8912257dfcda" containerName="docker-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.120104 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f744689-6cbf-4773-bb52-8912257dfcda" containerName="docker-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.120178 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f744689-6cbf-4773-bb52-8912257dfcda" containerName="manage-dockerfile" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.120192 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f744689-6cbf-4773-bb52-8912257dfcda" containerName="manage-dockerfile" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.120345 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f744689-6cbf-4773-bb52-8912257dfcda" containerName="docker-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.263089 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.263375 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.265799 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-ca\"" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.266059 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-sys-config\"" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.270560 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-global-ca\"" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.270834 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-xhpgk\"" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.383183 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-system-configs\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.383248 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.383282 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-buildworkdir\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " 
pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.383303 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-container-storage-run\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.383356 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.383382 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/28482f50-73b6-4e18-a992-c4787ef60eb1-builder-dockercfg-xhpgk-pull\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.383398 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/28482f50-73b6-4e18-a992-c4787ef60eb1-buildcachedir\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.383449 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/28482f50-73b6-4e18-a992-c4787ef60eb1-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " 
pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.383483 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-container-storage-root\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.383535 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/28482f50-73b6-4e18-a992-c4787ef60eb1-builder-dockercfg-xhpgk-push\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.383563 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.383634 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggs62\" (UniqueName: \"kubernetes.io/projected/28482f50-73b6-4e18-a992-c4787ef60eb1-kube-api-access-ggs62\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.462959 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f744689-6cbf-4773-bb52-8912257dfcda" path="/var/lib/kubelet/pods/1f744689-6cbf-4773-bb52-8912257dfcda/volumes" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.485673 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-system-configs\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.485725 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.485753 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-buildworkdir\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.485773 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-container-storage-run\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.485800 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.485830 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/28482f50-73b6-4e18-a992-c4787ef60eb1-builder-dockercfg-xhpgk-pull\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.485853 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/28482f50-73b6-4e18-a992-c4787ef60eb1-buildcachedir\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.485878 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/28482f50-73b6-4e18-a992-c4787ef60eb1-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.485908 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-container-storage-root\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.485942 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/28482f50-73b6-4e18-a992-c4787ef60eb1-builder-dockercfg-xhpgk-push\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.486009 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.486065 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ggs62\" (UniqueName: \"kubernetes.io/projected/28482f50-73b6-4e18-a992-c4787ef60eb1-kube-api-access-ggs62\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.486801 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/28482f50-73b6-4e18-a992-c4787ef60eb1-buildcachedir\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.486946 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-container-storage-root\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.487026 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/28482f50-73b6-4e18-a992-c4787ef60eb1-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.487126 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-buildworkdir\") pod \"sg-core-2-build\" (UID: 
\"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.488094 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-container-storage-run\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.488247 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.488593 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.489109 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.489120 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-system-configs\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 
00:29:46.500099 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/28482f50-73b6-4e18-a992-c4787ef60eb1-builder-dockercfg-xhpgk-pull\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.500173 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/28482f50-73b6-4e18-a992-c4787ef60eb1-builder-dockercfg-xhpgk-push\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.507881 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggs62\" (UniqueName: \"kubernetes.io/projected/28482f50-73b6-4e18-a992-c4787ef60eb1-kube-api-access-ggs62\") pod \"sg-core-2-build\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.593732 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 04 00:29:46 crc kubenswrapper[5108]: I0104 00:29:46.869873 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 04 00:29:47 crc kubenswrapper[5108]: I0104 00:29:47.352312 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"28482f50-73b6-4e18-a992-c4787ef60eb1","Type":"ContainerStarted","Data":"1af22c7e2f49645f655d26cd3ff4203b684455a8ad56cf196bb037bcd8fb12ab"} Jan 04 00:29:47 crc kubenswrapper[5108]: I0104 00:29:47.352814 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"28482f50-73b6-4e18-a992-c4787ef60eb1","Type":"ContainerStarted","Data":"8415c312056b0f2bd859553e3b98185240a15448c6aec70eae6d1abdd8f78e56"} Jan 04 00:29:48 crc kubenswrapper[5108]: I0104 00:29:48.361872 5108 generic.go:358] "Generic (PLEG): container finished" podID="28482f50-73b6-4e18-a992-c4787ef60eb1" containerID="1af22c7e2f49645f655d26cd3ff4203b684455a8ad56cf196bb037bcd8fb12ab" exitCode=0 Jan 04 00:29:48 crc kubenswrapper[5108]: I0104 00:29:48.362269 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"28482f50-73b6-4e18-a992-c4787ef60eb1","Type":"ContainerDied","Data":"1af22c7e2f49645f655d26cd3ff4203b684455a8ad56cf196bb037bcd8fb12ab"} Jan 04 00:29:49 crc kubenswrapper[5108]: I0104 00:29:49.399404 5108 generic.go:358] "Generic (PLEG): container finished" podID="28482f50-73b6-4e18-a992-c4787ef60eb1" containerID="4d74528f2e5ba560a4876c40da4fa862b97ff7e8dabe1831d046f8f8bfb3bf63" exitCode=0 Jan 04 00:29:49 crc kubenswrapper[5108]: I0104 00:29:49.399578 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"28482f50-73b6-4e18-a992-c4787ef60eb1","Type":"ContainerDied","Data":"4d74528f2e5ba560a4876c40da4fa862b97ff7e8dabe1831d046f8f8bfb3bf63"} Jan 04 00:29:49 crc 
kubenswrapper[5108]: I0104 00:29:49.440635 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_28482f50-73b6-4e18-a992-c4787ef60eb1/manage-dockerfile/0.log" Jan 04 00:29:51 crc kubenswrapper[5108]: I0104 00:29:51.420762 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"28482f50-73b6-4e18-a992-c4787ef60eb1","Type":"ContainerStarted","Data":"f0afbbc568319e146d26b3985a64cc99ebf4a4c14d92feadd14b07cc435f388c"} Jan 04 00:29:51 crc kubenswrapper[5108]: I0104 00:29:51.458836 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.458810058 podStartE2EDuration="5.458810058s" podCreationTimestamp="2026-01-04 00:29:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:29:51.456961367 +0000 UTC m=+1165.445526473" watchObservedRunningTime="2026-01-04 00:29:51.458810058 +0000 UTC m=+1165.447375144" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.145869 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29458110-zcgsn"] Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.175603 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm"] Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.175859 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29458110-zcgsn" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.179418 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-s7k94\"" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.183067 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458110-zcgsn"] Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.183123 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm"] Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.183319 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.183314 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.184015 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.185991 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.186227 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.221099 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d167832-cabc-4f25-80a4-fb975d878c2e-config-volume\") pod \"collect-profiles-29458110-zf2mm\" (UID: 
\"2d167832-cabc-4f25-80a4-fb975d878c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.221225 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d167832-cabc-4f25-80a4-fb975d878c2e-secret-volume\") pod \"collect-profiles-29458110-zf2mm\" (UID: \"2d167832-cabc-4f25-80a4-fb975d878c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.221274 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldkxj\" (UniqueName: \"kubernetes.io/projected/2d167832-cabc-4f25-80a4-fb975d878c2e-kube-api-access-ldkxj\") pod \"collect-profiles-29458110-zf2mm\" (UID: \"2d167832-cabc-4f25-80a4-fb975d878c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.221305 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbph6\" (UniqueName: \"kubernetes.io/projected/d010b6a7-84b0-4f46-9be2-a1c621bdbc11-kube-api-access-qbph6\") pod \"auto-csr-approver-29458110-zcgsn\" (UID: \"d010b6a7-84b0-4f46-9be2-a1c621bdbc11\") " pod="openshift-infra/auto-csr-approver-29458110-zcgsn" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.323075 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qbph6\" (UniqueName: \"kubernetes.io/projected/d010b6a7-84b0-4f46-9be2-a1c621bdbc11-kube-api-access-qbph6\") pod \"auto-csr-approver-29458110-zcgsn\" (UID: \"d010b6a7-84b0-4f46-9be2-a1c621bdbc11\") " pod="openshift-infra/auto-csr-approver-29458110-zcgsn" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.324223 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d167832-cabc-4f25-80a4-fb975d878c2e-config-volume\") pod \"collect-profiles-29458110-zf2mm\" (UID: \"2d167832-cabc-4f25-80a4-fb975d878c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.324400 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d167832-cabc-4f25-80a4-fb975d878c2e-secret-volume\") pod \"collect-profiles-29458110-zf2mm\" (UID: \"2d167832-cabc-4f25-80a4-fb975d878c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.324537 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ldkxj\" (UniqueName: \"kubernetes.io/projected/2d167832-cabc-4f25-80a4-fb975d878c2e-kube-api-access-ldkxj\") pod \"collect-profiles-29458110-zf2mm\" (UID: \"2d167832-cabc-4f25-80a4-fb975d878c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.325151 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d167832-cabc-4f25-80a4-fb975d878c2e-config-volume\") pod \"collect-profiles-29458110-zf2mm\" (UID: \"2d167832-cabc-4f25-80a4-fb975d878c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.332349 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d167832-cabc-4f25-80a4-fb975d878c2e-secret-volume\") pod \"collect-profiles-29458110-zf2mm\" (UID: \"2d167832-cabc-4f25-80a4-fb975d878c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm" Jan 04 00:30:00 crc kubenswrapper[5108]: 
I0104 00:30:00.345580 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldkxj\" (UniqueName: \"kubernetes.io/projected/2d167832-cabc-4f25-80a4-fb975d878c2e-kube-api-access-ldkxj\") pod \"collect-profiles-29458110-zf2mm\" (UID: \"2d167832-cabc-4f25-80a4-fb975d878c2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.347141 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbph6\" (UniqueName: \"kubernetes.io/projected/d010b6a7-84b0-4f46-9be2-a1c621bdbc11-kube-api-access-qbph6\") pod \"auto-csr-approver-29458110-zcgsn\" (UID: \"d010b6a7-84b0-4f46-9be2-a1c621bdbc11\") " pod="openshift-infra/auto-csr-approver-29458110-zcgsn" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.504754 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458110-zcgsn" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.515364 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm" Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.767278 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm"] Jan 04 00:30:00 crc kubenswrapper[5108]: I0104 00:30:00.815580 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458110-zcgsn"] Jan 04 00:30:01 crc kubenswrapper[5108]: I0104 00:30:01.499591 5108 generic.go:358] "Generic (PLEG): container finished" podID="2d167832-cabc-4f25-80a4-fb975d878c2e" containerID="e38d771755e7ec1a52793d239a594b43b7058d087b5cce93a5e0a0f26da42a67" exitCode=0 Jan 04 00:30:01 crc kubenswrapper[5108]: I0104 00:30:01.499707 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm" event={"ID":"2d167832-cabc-4f25-80a4-fb975d878c2e","Type":"ContainerDied","Data":"e38d771755e7ec1a52793d239a594b43b7058d087b5cce93a5e0a0f26da42a67"} Jan 04 00:30:01 crc kubenswrapper[5108]: I0104 00:30:01.500216 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm" event={"ID":"2d167832-cabc-4f25-80a4-fb975d878c2e","Type":"ContainerStarted","Data":"73438a1b259df0e17cec7373b11be7dd1f13f42d9738f604d4319470fc12014c"} Jan 04 00:30:01 crc kubenswrapper[5108]: I0104 00:30:01.501512 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458110-zcgsn" event={"ID":"d010b6a7-84b0-4f46-9be2-a1c621bdbc11","Type":"ContainerStarted","Data":"d3e5ada0e36ea903d92e561fa9bb8b2665e853d8174f52c5461ce7bfc6abede4"} Jan 04 00:30:02 crc kubenswrapper[5108]: I0104 00:30:02.511179 5108 generic.go:358] "Generic (PLEG): container finished" podID="d010b6a7-84b0-4f46-9be2-a1c621bdbc11" containerID="b5c58b4a6349954c323a68194cebb5516510116ad0b63767146a36e3dce7f6b0" exitCode=0 Jan 04 
00:30:02 crc kubenswrapper[5108]: I0104 00:30:02.511390 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458110-zcgsn" event={"ID":"d010b6a7-84b0-4f46-9be2-a1c621bdbc11","Type":"ContainerDied","Data":"b5c58b4a6349954c323a68194cebb5516510116ad0b63767146a36e3dce7f6b0"} Jan 04 00:30:02 crc kubenswrapper[5108]: I0104 00:30:02.765270 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm" Jan 04 00:30:02 crc kubenswrapper[5108]: I0104 00:30:02.868250 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d167832-cabc-4f25-80a4-fb975d878c2e-config-volume\") pod \"2d167832-cabc-4f25-80a4-fb975d878c2e\" (UID: \"2d167832-cabc-4f25-80a4-fb975d878c2e\") " Jan 04 00:30:02 crc kubenswrapper[5108]: I0104 00:30:02.868522 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d167832-cabc-4f25-80a4-fb975d878c2e-secret-volume\") pod \"2d167832-cabc-4f25-80a4-fb975d878c2e\" (UID: \"2d167832-cabc-4f25-80a4-fb975d878c2e\") " Jan 04 00:30:02 crc kubenswrapper[5108]: I0104 00:30:02.868571 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldkxj\" (UniqueName: \"kubernetes.io/projected/2d167832-cabc-4f25-80a4-fb975d878c2e-kube-api-access-ldkxj\") pod \"2d167832-cabc-4f25-80a4-fb975d878c2e\" (UID: \"2d167832-cabc-4f25-80a4-fb975d878c2e\") " Jan 04 00:30:02 crc kubenswrapper[5108]: I0104 00:30:02.869755 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d167832-cabc-4f25-80a4-fb975d878c2e-config-volume" (OuterVolumeSpecName: "config-volume") pod "2d167832-cabc-4f25-80a4-fb975d878c2e" (UID: "2d167832-cabc-4f25-80a4-fb975d878c2e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:30:02 crc kubenswrapper[5108]: I0104 00:30:02.877603 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d167832-cabc-4f25-80a4-fb975d878c2e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2d167832-cabc-4f25-80a4-fb975d878c2e" (UID: "2d167832-cabc-4f25-80a4-fb975d878c2e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:30:02 crc kubenswrapper[5108]: I0104 00:30:02.878577 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d167832-cabc-4f25-80a4-fb975d878c2e-kube-api-access-ldkxj" (OuterVolumeSpecName: "kube-api-access-ldkxj") pod "2d167832-cabc-4f25-80a4-fb975d878c2e" (UID: "2d167832-cabc-4f25-80a4-fb975d878c2e"). InnerVolumeSpecName "kube-api-access-ldkxj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:30:02 crc kubenswrapper[5108]: I0104 00:30:02.970267 5108 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d167832-cabc-4f25-80a4-fb975d878c2e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 04 00:30:02 crc kubenswrapper[5108]: I0104 00:30:02.970322 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ldkxj\" (UniqueName: \"kubernetes.io/projected/2d167832-cabc-4f25-80a4-fb975d878c2e-kube-api-access-ldkxj\") on node \"crc\" DevicePath \"\"" Jan 04 00:30:02 crc kubenswrapper[5108]: I0104 00:30:02.970333 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d167832-cabc-4f25-80a4-fb975d878c2e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 04 00:30:03 crc kubenswrapper[5108]: I0104 00:30:03.521406 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm" Jan 04 00:30:03 crc kubenswrapper[5108]: I0104 00:30:03.521384 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29458110-zf2mm" event={"ID":"2d167832-cabc-4f25-80a4-fb975d878c2e","Type":"ContainerDied","Data":"73438a1b259df0e17cec7373b11be7dd1f13f42d9738f604d4319470fc12014c"} Jan 04 00:30:03 crc kubenswrapper[5108]: I0104 00:30:03.521469 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73438a1b259df0e17cec7373b11be7dd1f13f42d9738f604d4319470fc12014c" Jan 04 00:30:03 crc kubenswrapper[5108]: I0104 00:30:03.793143 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458110-zcgsn" Jan 04 00:30:03 crc kubenswrapper[5108]: I0104 00:30:03.886351 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbph6\" (UniqueName: \"kubernetes.io/projected/d010b6a7-84b0-4f46-9be2-a1c621bdbc11-kube-api-access-qbph6\") pod \"d010b6a7-84b0-4f46-9be2-a1c621bdbc11\" (UID: \"d010b6a7-84b0-4f46-9be2-a1c621bdbc11\") " Jan 04 00:30:03 crc kubenswrapper[5108]: I0104 00:30:03.895688 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d010b6a7-84b0-4f46-9be2-a1c621bdbc11-kube-api-access-qbph6" (OuterVolumeSpecName: "kube-api-access-qbph6") pod "d010b6a7-84b0-4f46-9be2-a1c621bdbc11" (UID: "d010b6a7-84b0-4f46-9be2-a1c621bdbc11"). InnerVolumeSpecName "kube-api-access-qbph6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:30:03 crc kubenswrapper[5108]: I0104 00:30:03.996744 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qbph6\" (UniqueName: \"kubernetes.io/projected/d010b6a7-84b0-4f46-9be2-a1c621bdbc11-kube-api-access-qbph6\") on node \"crc\" DevicePath \"\"" Jan 04 00:30:04 crc kubenswrapper[5108]: I0104 00:30:04.531915 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458110-zcgsn" Jan 04 00:30:04 crc kubenswrapper[5108]: I0104 00:30:04.531998 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458110-zcgsn" event={"ID":"d010b6a7-84b0-4f46-9be2-a1c621bdbc11","Type":"ContainerDied","Data":"d3e5ada0e36ea903d92e561fa9bb8b2665e853d8174f52c5461ce7bfc6abede4"} Jan 04 00:30:04 crc kubenswrapper[5108]: I0104 00:30:04.532048 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3e5ada0e36ea903d92e561fa9bb8b2665e853d8174f52c5461ce7bfc6abede4" Jan 04 00:30:04 crc kubenswrapper[5108]: I0104 00:30:04.855784 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29458104-rknh4"] Jan 04 00:30:04 crc kubenswrapper[5108]: I0104 00:30:04.861909 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29458104-rknh4"] Jan 04 00:30:06 crc kubenswrapper[5108]: I0104 00:30:06.470918 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a194bab-cf96-4d6b-b9f6-60bdc5c57621" path="/var/lib/kubelet/pods/2a194bab-cf96-4d6b-b9f6-60bdc5c57621/volumes" Jan 04 00:30:26 crc kubenswrapper[5108]: I0104 00:30:26.929276 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzs5n_8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23/kube-multus/0.log" Jan 04 00:30:26 crc kubenswrapper[5108]: I0104 00:30:26.929293 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-rzs5n_8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23/kube-multus/0.log" Jan 04 00:30:26 crc kubenswrapper[5108]: I0104 00:30:26.942230 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 04 00:30:26 crc kubenswrapper[5108]: I0104 00:30:26.942745 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 04 00:30:30 crc kubenswrapper[5108]: I0104 00:30:30.521295 5108 scope.go:117] "RemoveContainer" containerID="91fab3741e0e41eb9ff0379c59b7dbdf4fbd5f18e24da388b85f838afe832e92" Jan 04 00:31:54 crc kubenswrapper[5108]: I0104 00:31:54.917186 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 04 00:31:54 crc kubenswrapper[5108]: I0104 00:31:54.918260 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 04 00:32:00 crc kubenswrapper[5108]: I0104 00:32:00.148451 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29458112-lfqrb"] Jan 04 00:32:00 crc kubenswrapper[5108]: I0104 00:32:00.151018 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d010b6a7-84b0-4f46-9be2-a1c621bdbc11" containerName="oc" Jan 04 00:32:00 crc kubenswrapper[5108]: I0104 00:32:00.151040 5108 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d010b6a7-84b0-4f46-9be2-a1c621bdbc11" containerName="oc" Jan 04 00:32:00 crc kubenswrapper[5108]: I0104 00:32:00.151061 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d167832-cabc-4f25-80a4-fb975d878c2e" containerName="collect-profiles" Jan 04 00:32:00 crc kubenswrapper[5108]: I0104 00:32:00.151070 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d167832-cabc-4f25-80a4-fb975d878c2e" containerName="collect-profiles" Jan 04 00:32:00 crc kubenswrapper[5108]: I0104 00:32:00.151266 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d010b6a7-84b0-4f46-9be2-a1c621bdbc11" containerName="oc" Jan 04 00:32:00 crc kubenswrapper[5108]: I0104 00:32:00.151286 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="2d167832-cabc-4f25-80a4-fb975d878c2e" containerName="collect-profiles" Jan 04 00:32:01 crc kubenswrapper[5108]: I0104 00:32:01.038027 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29458112-lfqrb" Jan 04 00:32:01 crc kubenswrapper[5108]: I0104 00:32:01.040506 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-s7k94\"" Jan 04 00:32:01 crc kubenswrapper[5108]: I0104 00:32:01.042037 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 04 00:32:01 crc kubenswrapper[5108]: I0104 00:32:01.042097 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 04 00:32:01 crc kubenswrapper[5108]: I0104 00:32:01.047022 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458112-lfqrb"] Jan 04 00:32:01 crc kubenswrapper[5108]: I0104 00:32:01.190486 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wchgv\" (UniqueName: \"kubernetes.io/projected/076318d3-ef17-4b92-8c2f-1c9c9ce86c2d-kube-api-access-wchgv\") pod \"auto-csr-approver-29458112-lfqrb\" (UID: \"076318d3-ef17-4b92-8c2f-1c9c9ce86c2d\") " pod="openshift-infra/auto-csr-approver-29458112-lfqrb" Jan 04 00:32:01 crc kubenswrapper[5108]: I0104 00:32:01.292900 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wchgv\" (UniqueName: \"kubernetes.io/projected/076318d3-ef17-4b92-8c2f-1c9c9ce86c2d-kube-api-access-wchgv\") pod \"auto-csr-approver-29458112-lfqrb\" (UID: \"076318d3-ef17-4b92-8c2f-1c9c9ce86c2d\") " pod="openshift-infra/auto-csr-approver-29458112-lfqrb" Jan 04 00:32:01 crc kubenswrapper[5108]: I0104 00:32:01.322623 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wchgv\" (UniqueName: \"kubernetes.io/projected/076318d3-ef17-4b92-8c2f-1c9c9ce86c2d-kube-api-access-wchgv\") pod \"auto-csr-approver-29458112-lfqrb\" (UID: 
\"076318d3-ef17-4b92-8c2f-1c9c9ce86c2d\") " pod="openshift-infra/auto-csr-approver-29458112-lfqrb" Jan 04 00:32:01 crc kubenswrapper[5108]: I0104 00:32:01.358044 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458112-lfqrb" Jan 04 00:32:01 crc kubenswrapper[5108]: I0104 00:32:01.590258 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458112-lfqrb"] Jan 04 00:32:01 crc kubenswrapper[5108]: I0104 00:32:01.802224 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458112-lfqrb" event={"ID":"076318d3-ef17-4b92-8c2f-1c9c9ce86c2d","Type":"ContainerStarted","Data":"80ed4f4249fdd44d76a47c8061fee50fa17e2825e5f92c08598ff9d59d99619c"} Jan 04 00:32:03 crc kubenswrapper[5108]: I0104 00:32:03.821076 5108 generic.go:358] "Generic (PLEG): container finished" podID="076318d3-ef17-4b92-8c2f-1c9c9ce86c2d" containerID="5fbb4cd4295b47cf480ed517fcd2bb0882857df4fe79ab6028bc31da8dd9d724" exitCode=0 Jan 04 00:32:03 crc kubenswrapper[5108]: I0104 00:32:03.822303 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458112-lfqrb" event={"ID":"076318d3-ef17-4b92-8c2f-1c9c9ce86c2d","Type":"ContainerDied","Data":"5fbb4cd4295b47cf480ed517fcd2bb0882857df4fe79ab6028bc31da8dd9d724"} Jan 04 00:32:05 crc kubenswrapper[5108]: I0104 00:32:05.099598 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29458112-lfqrb" Jan 04 00:32:05 crc kubenswrapper[5108]: I0104 00:32:05.257864 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wchgv\" (UniqueName: \"kubernetes.io/projected/076318d3-ef17-4b92-8c2f-1c9c9ce86c2d-kube-api-access-wchgv\") pod \"076318d3-ef17-4b92-8c2f-1c9c9ce86c2d\" (UID: \"076318d3-ef17-4b92-8c2f-1c9c9ce86c2d\") " Jan 04 00:32:05 crc kubenswrapper[5108]: I0104 00:32:05.267570 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/076318d3-ef17-4b92-8c2f-1c9c9ce86c2d-kube-api-access-wchgv" (OuterVolumeSpecName: "kube-api-access-wchgv") pod "076318d3-ef17-4b92-8c2f-1c9c9ce86c2d" (UID: "076318d3-ef17-4b92-8c2f-1c9c9ce86c2d"). InnerVolumeSpecName "kube-api-access-wchgv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:32:05 crc kubenswrapper[5108]: I0104 00:32:05.359797 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wchgv\" (UniqueName: \"kubernetes.io/projected/076318d3-ef17-4b92-8c2f-1c9c9ce86c2d-kube-api-access-wchgv\") on node \"crc\" DevicePath \"\"" Jan 04 00:32:05 crc kubenswrapper[5108]: I0104 00:32:05.846468 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458112-lfqrb" event={"ID":"076318d3-ef17-4b92-8c2f-1c9c9ce86c2d","Type":"ContainerDied","Data":"80ed4f4249fdd44d76a47c8061fee50fa17e2825e5f92c08598ff9d59d99619c"} Jan 04 00:32:05 crc kubenswrapper[5108]: I0104 00:32:05.846932 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ed4f4249fdd44d76a47c8061fee50fa17e2825e5f92c08598ff9d59d99619c" Jan 04 00:32:05 crc kubenswrapper[5108]: I0104 00:32:05.846552 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29458112-lfqrb" Jan 04 00:32:06 crc kubenswrapper[5108]: I0104 00:32:06.185346 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29458106-mttmg"] Jan 04 00:32:06 crc kubenswrapper[5108]: I0104 00:32:06.195109 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29458106-mttmg"] Jan 04 00:32:06 crc kubenswrapper[5108]: I0104 00:32:06.461653 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea28c0a-3424-4943-adfa-182583f45b2b" path="/var/lib/kubelet/pods/dea28c0a-3424-4943-adfa-182583f45b2b/volumes" Jan 04 00:32:24 crc kubenswrapper[5108]: I0104 00:32:24.918181 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 04 00:32:24 crc kubenswrapper[5108]: I0104 00:32:24.919281 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 04 00:32:30 crc kubenswrapper[5108]: I0104 00:32:30.694690 5108 scope.go:117] "RemoveContainer" containerID="35ddd83f39d73de7f5efce4a5a158390e11f945ba1065af1dd0a58cb6d71a35f" Jan 04 00:32:48 crc kubenswrapper[5108]: E0104 00:32:48.826098 5108 kubelet.go:2642] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.377s" Jan 04 00:32:54 crc kubenswrapper[5108]: I0104 00:32:54.917933 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 04 00:32:54 crc kubenswrapper[5108]: I0104 00:32:54.919100 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 04 00:32:54 crc kubenswrapper[5108]: I0104 00:32:54.919171 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" Jan 04 00:32:54 crc kubenswrapper[5108]: I0104 00:32:54.920014 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bad0ea277fd94911974fbd9c4fb75c82a3196517d30a4e258eccd8f7cc79a379"} pod="openshift-machine-config-operator/machine-config-daemon-njl5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 04 00:32:54 crc kubenswrapper[5108]: I0104 00:32:54.920069 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" containerID="cri-o://bad0ea277fd94911974fbd9c4fb75c82a3196517d30a4e258eccd8f7cc79a379" gracePeriod=600 Jan 04 00:32:56 crc kubenswrapper[5108]: I0104 00:32:56.265641 5108 generic.go:358] "Generic (PLEG): container finished" podID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerID="bad0ea277fd94911974fbd9c4fb75c82a3196517d30a4e258eccd8f7cc79a379" exitCode=0 Jan 04 00:32:56 crc kubenswrapper[5108]: I0104 00:32:56.265739 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerDied","Data":"bad0ea277fd94911974fbd9c4fb75c82a3196517d30a4e258eccd8f7cc79a379"} Jan 04 00:32:56 crc kubenswrapper[5108]: I0104 00:32:56.265834 5108 scope.go:117] "RemoveContainer" containerID="d315c271b5ebb5ccd4137805a4c0a0f8051b40ee81c1c5c36d5b609914f2eb07" Jan 04 00:32:57 crc kubenswrapper[5108]: I0104 00:32:57.557102 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 04 00:32:59 crc kubenswrapper[5108]: I0104 00:32:59.295806 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerStarted","Data":"71e1a23e6a33296265d8312485d92dabf3435cdf7d47549db16b40e0523240ea"} Jan 04 00:33:28 crc kubenswrapper[5108]: I0104 00:33:28.517849 5108 generic.go:358] "Generic (PLEG): container finished" podID="28482f50-73b6-4e18-a992-c4787ef60eb1" containerID="f0afbbc568319e146d26b3985a64cc99ebf4a4c14d92feadd14b07cc435f388c" exitCode=0 Jan 04 00:33:28 crc kubenswrapper[5108]: I0104 00:33:28.518655 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"28482f50-73b6-4e18-a992-c4787ef60eb1","Type":"ContainerDied","Data":"f0afbbc568319e146d26b3985a64cc99ebf4a4c14d92feadd14b07cc435f388c"} Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.783702 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.915479 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-proxy-ca-bundles\") pod \"28482f50-73b6-4e18-a992-c4787ef60eb1\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.915565 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/28482f50-73b6-4e18-a992-c4787ef60eb1-buildcachedir\") pod \"28482f50-73b6-4e18-a992-c4787ef60eb1\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.915599 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-system-configs\") pod \"28482f50-73b6-4e18-a992-c4787ef60eb1\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.915615 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-ca-bundles\") pod \"28482f50-73b6-4e18-a992-c4787ef60eb1\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.915646 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-container-storage-run\") pod \"28482f50-73b6-4e18-a992-c4787ef60eb1\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.915699 5108 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/28482f50-73b6-4e18-a992-c4787ef60eb1-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "28482f50-73b6-4e18-a992-c4787ef60eb1" (UID: "28482f50-73b6-4e18-a992-c4787ef60eb1"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.916120 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/28482f50-73b6-4e18-a992-c4787ef60eb1-builder-dockercfg-xhpgk-push\") pod \"28482f50-73b6-4e18-a992-c4787ef60eb1\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.916238 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggs62\" (UniqueName: \"kubernetes.io/projected/28482f50-73b6-4e18-a992-c4787ef60eb1-kube-api-access-ggs62\") pod \"28482f50-73b6-4e18-a992-c4787ef60eb1\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.916321 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/28482f50-73b6-4e18-a992-c4787ef60eb1-builder-dockercfg-xhpgk-pull\") pod \"28482f50-73b6-4e18-a992-c4787ef60eb1\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.916486 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/28482f50-73b6-4e18-a992-c4787ef60eb1-node-pullsecrets\") pod \"28482f50-73b6-4e18-a992-c4787ef60eb1\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.916549 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-build-blob-cache\") pod \"28482f50-73b6-4e18-a992-c4787ef60eb1\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.916639 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-buildworkdir\") pod \"28482f50-73b6-4e18-a992-c4787ef60eb1\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.916669 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-container-storage-root\") pod \"28482f50-73b6-4e18-a992-c4787ef60eb1\" (UID: \"28482f50-73b6-4e18-a992-c4787ef60eb1\") " Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.916708 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28482f50-73b6-4e18-a992-c4787ef60eb1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "28482f50-73b6-4e18-a992-c4787ef60eb1" (UID: "28482f50-73b6-4e18-a992-c4787ef60eb1"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.917320 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/28482f50-73b6-4e18-a992-c4787ef60eb1-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.917350 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/28482f50-73b6-4e18-a992-c4787ef60eb1-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.917316 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "28482f50-73b6-4e18-a992-c4787ef60eb1" (UID: "28482f50-73b6-4e18-a992-c4787ef60eb1"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.917345 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "28482f50-73b6-4e18-a992-c4787ef60eb1" (UID: "28482f50-73b6-4e18-a992-c4787ef60eb1"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.918357 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "28482f50-73b6-4e18-a992-c4787ef60eb1" (UID: "28482f50-73b6-4e18-a992-c4787ef60eb1"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.924441 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28482f50-73b6-4e18-a992-c4787ef60eb1-kube-api-access-ggs62" (OuterVolumeSpecName: "kube-api-access-ggs62") pod "28482f50-73b6-4e18-a992-c4787ef60eb1" (UID: "28482f50-73b6-4e18-a992-c4787ef60eb1"). InnerVolumeSpecName "kube-api-access-ggs62". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.924541 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28482f50-73b6-4e18-a992-c4787ef60eb1-builder-dockercfg-xhpgk-push" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-push") pod "28482f50-73b6-4e18-a992-c4787ef60eb1" (UID: "28482f50-73b6-4e18-a992-c4787ef60eb1"). InnerVolumeSpecName "builder-dockercfg-xhpgk-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.924923 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28482f50-73b6-4e18-a992-c4787ef60eb1-builder-dockercfg-xhpgk-pull" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-pull") pod "28482f50-73b6-4e18-a992-c4787ef60eb1" (UID: "28482f50-73b6-4e18-a992-c4787ef60eb1"). InnerVolumeSpecName "builder-dockercfg-xhpgk-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.935083 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "28482f50-73b6-4e18-a992-c4787ef60eb1" (UID: "28482f50-73b6-4e18-a992-c4787ef60eb1"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:33:29 crc kubenswrapper[5108]: I0104 00:33:29.937235 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "28482f50-73b6-4e18-a992-c4787ef60eb1" (UID: "28482f50-73b6-4e18-a992-c4787ef60eb1"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:33:30 crc kubenswrapper[5108]: I0104 00:33:30.018598 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:30 crc kubenswrapper[5108]: I0104 00:33:30.019385 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:30 crc kubenswrapper[5108]: I0104 00:33:30.019402 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:30 crc kubenswrapper[5108]: I0104 00:33:30.019415 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28482f50-73b6-4e18-a992-c4787ef60eb1-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:30 crc kubenswrapper[5108]: I0104 00:33:30.019428 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:30 crc kubenswrapper[5108]: I0104 00:33:30.019442 5108 reconciler_common.go:299] "Volume detached for volume 
\"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/28482f50-73b6-4e18-a992-c4787ef60eb1-builder-dockercfg-xhpgk-push\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:30 crc kubenswrapper[5108]: I0104 00:33:30.019454 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ggs62\" (UniqueName: \"kubernetes.io/projected/28482f50-73b6-4e18-a992-c4787ef60eb1-kube-api-access-ggs62\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:30 crc kubenswrapper[5108]: I0104 00:33:30.019466 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/28482f50-73b6-4e18-a992-c4787ef60eb1-builder-dockercfg-xhpgk-pull\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:30 crc kubenswrapper[5108]: I0104 00:33:30.320244 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "28482f50-73b6-4e18-a992-c4787ef60eb1" (UID: "28482f50-73b6-4e18-a992-c4787ef60eb1"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:33:30 crc kubenswrapper[5108]: I0104 00:33:30.323421 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:30 crc kubenswrapper[5108]: I0104 00:33:30.541975 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 04 00:33:30 crc kubenswrapper[5108]: I0104 00:33:30.541967 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"28482f50-73b6-4e18-a992-c4787ef60eb1","Type":"ContainerDied","Data":"8415c312056b0f2bd859553e3b98185240a15448c6aec70eae6d1abdd8f78e56"} Jan 04 00:33:30 crc kubenswrapper[5108]: I0104 00:33:30.542388 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8415c312056b0f2bd859553e3b98185240a15448c6aec70eae6d1abdd8f78e56" Jan 04 00:33:32 crc kubenswrapper[5108]: I0104 00:33:32.575057 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "28482f50-73b6-4e18-a992-c4787ef60eb1" (UID: "28482f50-73b6-4e18-a992-c4787ef60eb1"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:33:32 crc kubenswrapper[5108]: I0104 00:33:32.685573 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/28482f50-73b6-4e18-a992-c4787ef60eb1-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.376546 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.378135 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28482f50-73b6-4e18-a992-c4787ef60eb1" containerName="manage-dockerfile" Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.378157 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="28482f50-73b6-4e18-a992-c4787ef60eb1" containerName="manage-dockerfile" Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.378189 5108 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28482f50-73b6-4e18-a992-c4787ef60eb1" containerName="git-clone" Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.378265 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="28482f50-73b6-4e18-a992-c4787ef60eb1" containerName="git-clone" Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.378284 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="076318d3-ef17-4b92-8c2f-1c9c9ce86c2d" containerName="oc" Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.378294 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="076318d3-ef17-4b92-8c2f-1c9c9ce86c2d" containerName="oc" Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.378316 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28482f50-73b6-4e18-a992-c4787ef60eb1" containerName="docker-build" Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.378324 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="28482f50-73b6-4e18-a992-c4787ef60eb1" containerName="docker-build" Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.378469 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="076318d3-ef17-4b92-8c2f-1c9c9ce86c2d" containerName="oc" Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.378483 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="28482f50-73b6-4e18-a992-c4787ef60eb1" containerName="docker-build" Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.874791 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.875135 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.877696 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-global-ca\"" Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.878046 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-xhpgk\"" Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.879150 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-ca\"" Jan 04 00:33:34 crc kubenswrapper[5108]: I0104 00:33:34.884154 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-sys-config\"" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.017177 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/dcabedff-fce3-485f-9a18-b86342c79e04-builder-dockercfg-xhpgk-push\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.017612 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.017787 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-container-storage-run\") pod \"sg-bridge-1-build\" (UID: 
\"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.017875 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mjk6\" (UniqueName: \"kubernetes.io/projected/dcabedff-fce3-485f-9a18-b86342c79e04-kube-api-access-4mjk6\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.017973 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.018092 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.018220 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dcabedff-fce3-485f-9a18-b86342c79e04-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.018312 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/dcabedff-fce3-485f-9a18-b86342c79e04-builder-dockercfg-xhpgk-pull\") pod 
\"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.018982 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dcabedff-fce3-485f-9a18-b86342c79e04-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.019478 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.019861 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.020167 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.122695 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-build-blob-cache\") pod 
\"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.122781 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dcabedff-fce3-485f-9a18-b86342c79e04-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.122860 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dcabedff-fce3-485f-9a18-b86342c79e04-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.122893 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/dcabedff-fce3-485f-9a18-b86342c79e04-builder-dockercfg-xhpgk-pull\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.122912 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dcabedff-fce3-485f-9a18-b86342c79e04-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.122949 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " 
pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.122979 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.123016 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.123058 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/dcabedff-fce3-485f-9a18-b86342c79e04-builder-dockercfg-xhpgk-push\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.123096 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.123116 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 
00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.123137 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4mjk6\" (UniqueName: \"kubernetes.io/projected/dcabedff-fce3-485f-9a18-b86342c79e04-kube-api-access-4mjk6\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.123181 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.123443 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.123633 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.123813 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.123962 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.124607 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.124735 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dcabedff-fce3-485f-9a18-b86342c79e04-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.124766 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.125524 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.131957 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: 
\"kubernetes.io/secret/dcabedff-fce3-485f-9a18-b86342c79e04-builder-dockercfg-xhpgk-push\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.132492 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/dcabedff-fce3-485f-9a18-b86342c79e04-builder-dockercfg-xhpgk-pull\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.143515 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mjk6\" (UniqueName: \"kubernetes.io/projected/dcabedff-fce3-485f-9a18-b86342c79e04-kube-api-access-4mjk6\") pod \"sg-bridge-1-build\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.207801 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.451106 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 04 00:33:35 crc kubenswrapper[5108]: I0104 00:33:35.592266 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"dcabedff-fce3-485f-9a18-b86342c79e04","Type":"ContainerStarted","Data":"5b983e24c342b633fabe375079b4a822027ce225fd403fc381917f04a890df89"} Jan 04 00:33:36 crc kubenswrapper[5108]: I0104 00:33:36.151538 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f7hjx"] Jan 04 00:33:36 crc kubenswrapper[5108]: I0104 00:33:36.157756 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:36 crc kubenswrapper[5108]: I0104 00:33:36.170963 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f7hjx"] Jan 04 00:33:36 crc kubenswrapper[5108]: I0104 00:33:36.246908 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrbn6\" (UniqueName: \"kubernetes.io/projected/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-kube-api-access-nrbn6\") pod \"certified-operators-f7hjx\" (UID: \"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3\") " pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:36 crc kubenswrapper[5108]: I0104 00:33:36.247175 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-utilities\") pod \"certified-operators-f7hjx\" (UID: \"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3\") " pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:36 crc kubenswrapper[5108]: I0104 00:33:36.247282 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-catalog-content\") pod \"certified-operators-f7hjx\" (UID: \"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3\") " pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:36 crc kubenswrapper[5108]: I0104 00:33:36.348039 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-utilities\") pod \"certified-operators-f7hjx\" (UID: \"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3\") " pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:36 crc kubenswrapper[5108]: I0104 00:33:36.348094 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-catalog-content\") pod \"certified-operators-f7hjx\" (UID: \"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3\") " pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:36 crc kubenswrapper[5108]: I0104 00:33:36.348153 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nrbn6\" (UniqueName: \"kubernetes.io/projected/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-kube-api-access-nrbn6\") pod \"certified-operators-f7hjx\" (UID: \"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3\") " pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:36 crc kubenswrapper[5108]: I0104 00:33:36.348659 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-utilities\") pod \"certified-operators-f7hjx\" (UID: \"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3\") " pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:36 crc kubenswrapper[5108]: I0104 00:33:36.348700 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-catalog-content\") pod \"certified-operators-f7hjx\" (UID: \"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3\") " pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:36 crc kubenswrapper[5108]: I0104 00:33:36.371589 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrbn6\" (UniqueName: \"kubernetes.io/projected/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-kube-api-access-nrbn6\") pod \"certified-operators-f7hjx\" (UID: \"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3\") " pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:36 crc kubenswrapper[5108]: I0104 00:33:36.484680 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:36 crc kubenswrapper[5108]: I0104 00:33:36.606614 5108 generic.go:358] "Generic (PLEG): container finished" podID="dcabedff-fce3-485f-9a18-b86342c79e04" containerID="d137d79ad8f792fe5d5a5424ed413df0671c83c4b5c0440d3f0265359fb23807" exitCode=0 Jan 04 00:33:36 crc kubenswrapper[5108]: I0104 00:33:36.606683 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"dcabedff-fce3-485f-9a18-b86342c79e04","Type":"ContainerDied","Data":"d137d79ad8f792fe5d5a5424ed413df0671c83c4b5c0440d3f0265359fb23807"} Jan 04 00:33:36 crc kubenswrapper[5108]: I0104 00:33:36.726471 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f7hjx"] Jan 04 00:33:36 crc kubenswrapper[5108]: W0104 00:33:36.749067 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod951d964c_d0d5_4241_a4ee_f5ec8c8e24f3.slice/crio-a2df4e688492d4839130adaa69d640b0378b8de375f66bd7d44a9fb62e5fe7a5 WatchSource:0}: Error finding container a2df4e688492d4839130adaa69d640b0378b8de375f66bd7d44a9fb62e5fe7a5: Status 404 returned error can't find the container with id a2df4e688492d4839130adaa69d640b0378b8de375f66bd7d44a9fb62e5fe7a5 Jan 04 00:33:37 crc kubenswrapper[5108]: I0104 00:33:37.639042 5108 generic.go:358] "Generic (PLEG): container finished" podID="951d964c-d0d5-4241-a4ee-f5ec8c8e24f3" containerID="8d5e621c45138ee2f77863fc1ae5ea7498a1fb5dbf8de0b6192fa084f8ec9362" exitCode=0 Jan 04 00:33:37 crc kubenswrapper[5108]: I0104 00:33:37.639766 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f7hjx" event={"ID":"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3","Type":"ContainerDied","Data":"8d5e621c45138ee2f77863fc1ae5ea7498a1fb5dbf8de0b6192fa084f8ec9362"} Jan 04 00:33:37 crc kubenswrapper[5108]: I0104 00:33:37.639904 
5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f7hjx" event={"ID":"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3","Type":"ContainerStarted","Data":"a2df4e688492d4839130adaa69d640b0378b8de375f66bd7d44a9fb62e5fe7a5"} Jan 04 00:33:37 crc kubenswrapper[5108]: I0104 00:33:37.648913 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"dcabedff-fce3-485f-9a18-b86342c79e04","Type":"ContainerStarted","Data":"fa4ba7355347dd0052552f85458ec3674533a10333b23fcc5a33f83142f64437"} Jan 04 00:33:37 crc kubenswrapper[5108]: I0104 00:33:37.711044 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.711014486 podStartE2EDuration="3.711014486s" podCreationTimestamp="2026-01-04 00:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:33:37.699593082 +0000 UTC m=+1391.688158168" watchObservedRunningTime="2026-01-04 00:33:37.711014486 +0000 UTC m=+1391.699579612" Jan 04 00:33:39 crc kubenswrapper[5108]: I0104 00:33:39.666364 5108 generic.go:358] "Generic (PLEG): container finished" podID="951d964c-d0d5-4241-a4ee-f5ec8c8e24f3" containerID="35a6c77c5cf7991af9f51160caa7a0452f886fb4c536d1e50fc34710e0124d18" exitCode=0 Jan 04 00:33:39 crc kubenswrapper[5108]: I0104 00:33:39.666504 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f7hjx" event={"ID":"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3","Type":"ContainerDied","Data":"35a6c77c5cf7991af9f51160caa7a0452f886fb4c536d1e50fc34710e0124d18"} Jan 04 00:33:40 crc kubenswrapper[5108]: I0104 00:33:40.677794 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f7hjx" 
event={"ID":"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3","Type":"ContainerStarted","Data":"e381a3da07c74ad99e97e3a0077e15f108a1a985d40b16dc5385a4ae7aefb703"} Jan 04 00:33:44 crc kubenswrapper[5108]: I0104 00:33:44.537704 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f7hjx" podStartSLOduration=7.219910989 podStartE2EDuration="8.537671561s" podCreationTimestamp="2026-01-04 00:33:36 +0000 UTC" firstStartedPulling="2026-01-04 00:33:37.642953231 +0000 UTC m=+1391.631518327" lastFinishedPulling="2026-01-04 00:33:38.960713813 +0000 UTC m=+1392.949278899" observedRunningTime="2026-01-04 00:33:40.700404855 +0000 UTC m=+1394.688969941" watchObservedRunningTime="2026-01-04 00:33:44.537671561 +0000 UTC m=+1398.526236657" Jan 04 00:33:44 crc kubenswrapper[5108]: I0104 00:33:44.544661 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 04 00:33:44 crc kubenswrapper[5108]: I0104 00:33:44.545055 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="dcabedff-fce3-485f-9a18-b86342c79e04" containerName="docker-build" containerID="cri-o://fa4ba7355347dd0052552f85458ec3674533a10333b23fcc5a33f83142f64437" gracePeriod=30 Jan 04 00:33:46 crc kubenswrapper[5108]: I0104 00:33:46.181022 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.247136 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.247690 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.247462 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.253645 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-sys-config\"" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.253979 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-global-ca\"" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.254065 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-ca\"" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.258070 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.258166 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.311433 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.334394 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/beab3683-44e4-49e8-998d-003a814539a2-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.334459 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/beab3683-44e4-49e8-998d-003a814539a2-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " 
pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.334629 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xmgp\" (UniqueName: \"kubernetes.io/projected/beab3683-44e4-49e8-998d-003a814539a2-kube-api-access-9xmgp\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.334750 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.334895 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.335026 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.335057 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-container-storage-root\") pod \"sg-bridge-2-build\" (UID: 
\"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.335108 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.335153 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/beab3683-44e4-49e8-998d-003a814539a2-builder-dockercfg-xhpgk-pull\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.335324 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.335431 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.335475 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: 
\"kubernetes.io/secret/beab3683-44e4-49e8-998d-003a814539a2-builder-dockercfg-xhpgk-push\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.364190 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f7hjx"] Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.439111 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.439283 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/beab3683-44e4-49e8-998d-003a814539a2-builder-dockercfg-xhpgk-pull\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.439332 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.439365 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.439707 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/beab3683-44e4-49e8-998d-003a814539a2-builder-dockercfg-xhpgk-push\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.439847 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/beab3683-44e4-49e8-998d-003a814539a2-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.439959 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/beab3683-44e4-49e8-998d-003a814539a2-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.440030 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9xmgp\" (UniqueName: \"kubernetes.io/projected/beab3683-44e4-49e8-998d-003a814539a2-kube-api-access-9xmgp\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.440064 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.440101 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.440062 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.440062 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/beab3683-44e4-49e8-998d-003a814539a2-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.440232 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/beab3683-44e4-49e8-998d-003a814539a2-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.440263 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.440429 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.440456 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.441036 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.441072 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.441074 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.441147 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-build-blob-cache\") pod 
\"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.441183 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.457094 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/beab3683-44e4-49e8-998d-003a814539a2-builder-dockercfg-xhpgk-pull\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.462813 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xmgp\" (UniqueName: \"kubernetes.io/projected/beab3683-44e4-49e8-998d-003a814539a2-kube-api-access-9xmgp\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.463329 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/beab3683-44e4-49e8-998d-003a814539a2-builder-dockercfg-xhpgk-push\") pod \"sg-bridge-2-build\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.600618 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_dcabedff-fce3-485f-9a18-b86342c79e04/docker-build/0.log" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.601657 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.664241 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.744019 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_dcabedff-fce3-485f-9a18-b86342c79e04/docker-build/0.log" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.745478 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mjk6\" (UniqueName: \"kubernetes.io/projected/dcabedff-fce3-485f-9a18-b86342c79e04-kube-api-access-4mjk6\") pod \"dcabedff-fce3-485f-9a18-b86342c79e04\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.745623 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-system-configs\") pod \"dcabedff-fce3-485f-9a18-b86342c79e04\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.745652 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dcabedff-fce3-485f-9a18-b86342c79e04-node-pullsecrets\") pod \"dcabedff-fce3-485f-9a18-b86342c79e04\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.745724 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-container-storage-root\") pod \"dcabedff-fce3-485f-9a18-b86342c79e04\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.745783 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-buildworkdir\") pod \"dcabedff-fce3-485f-9a18-b86342c79e04\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.745840 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/dcabedff-fce3-485f-9a18-b86342c79e04-builder-dockercfg-xhpgk-pull\") pod \"dcabedff-fce3-485f-9a18-b86342c79e04\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.745904 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcabedff-fce3-485f-9a18-b86342c79e04-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "dcabedff-fce3-485f-9a18-b86342c79e04" (UID: "dcabedff-fce3-485f-9a18-b86342c79e04"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.745889 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/dcabedff-fce3-485f-9a18-b86342c79e04-builder-dockercfg-xhpgk-push\") pod \"dcabedff-fce3-485f-9a18-b86342c79e04\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.746150 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-proxy-ca-bundles\") pod \"dcabedff-fce3-485f-9a18-b86342c79e04\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.746186 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dcabedff-fce3-485f-9a18-b86342c79e04-buildcachedir\") pod \"dcabedff-fce3-485f-9a18-b86342c79e04\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.746238 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-build-blob-cache\") pod \"dcabedff-fce3-485f-9a18-b86342c79e04\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.746316 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-ca-bundles\") pod \"dcabedff-fce3-485f-9a18-b86342c79e04\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.746424 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-container-storage-run\") pod \"dcabedff-fce3-485f-9a18-b86342c79e04\" (UID: \"dcabedff-fce3-485f-9a18-b86342c79e04\") " Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.746549 5108 generic.go:358] "Generic (PLEG): container finished" podID="dcabedff-fce3-485f-9a18-b86342c79e04" containerID="fa4ba7355347dd0052552f85458ec3674533a10333b23fcc5a33f83142f64437" exitCode=1 Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.746775 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dcabedff-fce3-485f-9a18-b86342c79e04-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.746805 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "dcabedff-fce3-485f-9a18-b86342c79e04" (UID: "dcabedff-fce3-485f-9a18-b86342c79e04"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.747025 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcabedff-fce3-485f-9a18-b86342c79e04-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "dcabedff-fce3-485f-9a18-b86342c79e04" (UID: "dcabedff-fce3-485f-9a18-b86342c79e04"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.747289 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.747436 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"dcabedff-fce3-485f-9a18-b86342c79e04","Type":"ContainerDied","Data":"fa4ba7355347dd0052552f85458ec3674533a10333b23fcc5a33f83142f64437"} Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.747575 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "dcabedff-fce3-485f-9a18-b86342c79e04" (UID: "dcabedff-fce3-485f-9a18-b86342c79e04"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.748158 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "dcabedff-fce3-485f-9a18-b86342c79e04" (UID: "dcabedff-fce3-485f-9a18-b86342c79e04"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.747694 5108 scope.go:117] "RemoveContainer" containerID="fa4ba7355347dd0052552f85458ec3674533a10333b23fcc5a33f83142f64437" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.747589 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"dcabedff-fce3-485f-9a18-b86342c79e04","Type":"ContainerDied","Data":"5b983e24c342b633fabe375079b4a822027ce225fd403fc381917f04a890df89"} Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.751233 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcabedff-fce3-485f-9a18-b86342c79e04-kube-api-access-4mjk6" (OuterVolumeSpecName: "kube-api-access-4mjk6") pod "dcabedff-fce3-485f-9a18-b86342c79e04" (UID: "dcabedff-fce3-485f-9a18-b86342c79e04"). InnerVolumeSpecName "kube-api-access-4mjk6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.752133 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "dcabedff-fce3-485f-9a18-b86342c79e04" (UID: "dcabedff-fce3-485f-9a18-b86342c79e04"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.751772 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "dcabedff-fce3-485f-9a18-b86342c79e04" (UID: "dcabedff-fce3-485f-9a18-b86342c79e04"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.754144 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "dcabedff-fce3-485f-9a18-b86342c79e04" (UID: "dcabedff-fce3-485f-9a18-b86342c79e04"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.756743 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcabedff-fce3-485f-9a18-b86342c79e04-builder-dockercfg-xhpgk-push" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-push") pod "dcabedff-fce3-485f-9a18-b86342c79e04" (UID: "dcabedff-fce3-485f-9a18-b86342c79e04"). InnerVolumeSpecName "builder-dockercfg-xhpgk-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.761039 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcabedff-fce3-485f-9a18-b86342c79e04-builder-dockercfg-xhpgk-pull" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-pull") pod "dcabedff-fce3-485f-9a18-b86342c79e04" (UID: "dcabedff-fce3-485f-9a18-b86342c79e04"). InnerVolumeSpecName "builder-dockercfg-xhpgk-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.821146 5108 scope.go:117] "RemoveContainer" containerID="d137d79ad8f792fe5d5a5424ed413df0671c83c4b5c0440d3f0265359fb23807" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.840249 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "dcabedff-fce3-485f-9a18-b86342c79e04" (UID: "dcabedff-fce3-485f-9a18-b86342c79e04"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.848431 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.848459 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4mjk6\" (UniqueName: \"kubernetes.io/projected/dcabedff-fce3-485f-9a18-b86342c79e04-kube-api-access-4mjk6\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.848471 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.848481 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.848499 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.848509 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/dcabedff-fce3-485f-9a18-b86342c79e04-builder-dockercfg-xhpgk-pull\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.848520 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/dcabedff-fce3-485f-9a18-b86342c79e04-builder-dockercfg-xhpgk-push\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.848531 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.848539 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dcabedff-fce3-485f-9a18-b86342c79e04-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.848548 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dcabedff-fce3-485f-9a18-b86342c79e04-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.848557 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dcabedff-fce3-485f-9a18-b86342c79e04-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.923451 5108 scope.go:117] "RemoveContainer" containerID="fa4ba7355347dd0052552f85458ec3674533a10333b23fcc5a33f83142f64437" Jan 04 00:33:47 crc 
kubenswrapper[5108]: E0104 00:33:47.924001 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa4ba7355347dd0052552f85458ec3674533a10333b23fcc5a33f83142f64437\": container with ID starting with fa4ba7355347dd0052552f85458ec3674533a10333b23fcc5a33f83142f64437 not found: ID does not exist" containerID="fa4ba7355347dd0052552f85458ec3674533a10333b23fcc5a33f83142f64437" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.924065 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa4ba7355347dd0052552f85458ec3674533a10333b23fcc5a33f83142f64437"} err="failed to get container status \"fa4ba7355347dd0052552f85458ec3674533a10333b23fcc5a33f83142f64437\": rpc error: code = NotFound desc = could not find container \"fa4ba7355347dd0052552f85458ec3674533a10333b23fcc5a33f83142f64437\": container with ID starting with fa4ba7355347dd0052552f85458ec3674533a10333b23fcc5a33f83142f64437 not found: ID does not exist" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.924134 5108 scope.go:117] "RemoveContainer" containerID="d137d79ad8f792fe5d5a5424ed413df0671c83c4b5c0440d3f0265359fb23807" Jan 04 00:33:47 crc kubenswrapper[5108]: E0104 00:33:47.924399 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d137d79ad8f792fe5d5a5424ed413df0671c83c4b5c0440d3f0265359fb23807\": container with ID starting with d137d79ad8f792fe5d5a5424ed413df0671c83c4b5c0440d3f0265359fb23807 not found: ID does not exist" containerID="d137d79ad8f792fe5d5a5424ed413df0671c83c4b5c0440d3f0265359fb23807" Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.924415 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d137d79ad8f792fe5d5a5424ed413df0671c83c4b5c0440d3f0265359fb23807"} err="failed to get container status 
\"d137d79ad8f792fe5d5a5424ed413df0671c83c4b5c0440d3f0265359fb23807\": rpc error: code = NotFound desc = could not find container \"d137d79ad8f792fe5d5a5424ed413df0671c83c4b5c0440d3f0265359fb23807\": container with ID starting with d137d79ad8f792fe5d5a5424ed413df0671c83c4b5c0440d3f0265359fb23807 not found: ID does not exist" Jan 04 00:33:47 crc kubenswrapper[5108]: W0104 00:33:47.931607 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbeab3683_44e4_49e8_998d_003a814539a2.slice/crio-487102beb0bef73f914f5e162454d163ad448bc6700dd5b4c2213e8427d5698a WatchSource:0}: Error finding container 487102beb0bef73f914f5e162454d163ad448bc6700dd5b4c2213e8427d5698a: Status 404 returned error can't find the container with id 487102beb0bef73f914f5e162454d163ad448bc6700dd5b4c2213e8427d5698a Jan 04 00:33:47 crc kubenswrapper[5108]: I0104 00:33:47.934309 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 04 00:33:48 crc kubenswrapper[5108]: I0104 00:33:48.103045 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 04 00:33:48 crc kubenswrapper[5108]: I0104 00:33:48.107829 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 04 00:33:48 crc kubenswrapper[5108]: I0104 00:33:48.459475 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcabedff-fce3-485f-9a18-b86342c79e04" path="/var/lib/kubelet/pods/dcabedff-fce3-485f-9a18-b86342c79e04/volumes" Jan 04 00:33:48 crc kubenswrapper[5108]: I0104 00:33:48.762241 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"beab3683-44e4-49e8-998d-003a814539a2","Type":"ContainerStarted","Data":"ade6c02aaa1679841739447f69d918d86d8c77187caeebc81e3be1357f6c293d"} Jan 04 00:33:48 crc kubenswrapper[5108]: I0104 00:33:48.762317 5108 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"beab3683-44e4-49e8-998d-003a814539a2","Type":"ContainerStarted","Data":"487102beb0bef73f914f5e162454d163ad448bc6700dd5b4c2213e8427d5698a"} Jan 04 00:33:48 crc kubenswrapper[5108]: I0104 00:33:48.764937 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f7hjx" podUID="951d964c-d0d5-4241-a4ee-f5ec8c8e24f3" containerName="registry-server" containerID="cri-o://e381a3da07c74ad99e97e3a0077e15f108a1a985d40b16dc5385a4ae7aefb703" gracePeriod=2 Jan 04 00:33:48 crc kubenswrapper[5108]: E0104 00:33:48.834500 5108 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod951d964c_d0d5_4241_a4ee_f5ec8c8e24f3.slice/crio-conmon-e381a3da07c74ad99e97e3a0077e15f108a1a985d40b16dc5385a4ae7aefb703.scope\": RecentStats: unable to find data in memory cache]" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.165722 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.278682 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-catalog-content\") pod \"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3\" (UID: \"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3\") " Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.278788 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrbn6\" (UniqueName: \"kubernetes.io/projected/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-kube-api-access-nrbn6\") pod \"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3\" (UID: \"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3\") " Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.279110 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-utilities\") pod \"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3\" (UID: \"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3\") " Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.280520 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-utilities" (OuterVolumeSpecName: "utilities") pod "951d964c-d0d5-4241-a4ee-f5ec8c8e24f3" (UID: "951d964c-d0d5-4241-a4ee-f5ec8c8e24f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.287511 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-kube-api-access-nrbn6" (OuterVolumeSpecName: "kube-api-access-nrbn6") pod "951d964c-d0d5-4241-a4ee-f5ec8c8e24f3" (UID: "951d964c-d0d5-4241-a4ee-f5ec8c8e24f3"). InnerVolumeSpecName "kube-api-access-nrbn6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.316874 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "951d964c-d0d5-4241-a4ee-f5ec8c8e24f3" (UID: "951d964c-d0d5-4241-a4ee-f5ec8c8e24f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.380820 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-utilities\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.380888 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.380911 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nrbn6\" (UniqueName: \"kubernetes.io/projected/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3-kube-api-access-nrbn6\") on node \"crc\" DevicePath \"\"" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.772890 5108 generic.go:358] "Generic (PLEG): container finished" podID="beab3683-44e4-49e8-998d-003a814539a2" containerID="ade6c02aaa1679841739447f69d918d86d8c77187caeebc81e3be1357f6c293d" exitCode=0 Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.773013 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"beab3683-44e4-49e8-998d-003a814539a2","Type":"ContainerDied","Data":"ade6c02aaa1679841739447f69d918d86d8c77187caeebc81e3be1357f6c293d"} Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.775851 5108 generic.go:358] "Generic (PLEG): container finished" 
podID="951d964c-d0d5-4241-a4ee-f5ec8c8e24f3" containerID="e381a3da07c74ad99e97e3a0077e15f108a1a985d40b16dc5385a4ae7aefb703" exitCode=0 Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.775922 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f7hjx" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.775969 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f7hjx" event={"ID":"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3","Type":"ContainerDied","Data":"e381a3da07c74ad99e97e3a0077e15f108a1a985d40b16dc5385a4ae7aefb703"} Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.776079 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f7hjx" event={"ID":"951d964c-d0d5-4241-a4ee-f5ec8c8e24f3","Type":"ContainerDied","Data":"a2df4e688492d4839130adaa69d640b0378b8de375f66bd7d44a9fb62e5fe7a5"} Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.776108 5108 scope.go:117] "RemoveContainer" containerID="e381a3da07c74ad99e97e3a0077e15f108a1a985d40b16dc5385a4ae7aefb703" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.809674 5108 scope.go:117] "RemoveContainer" containerID="35a6c77c5cf7991af9f51160caa7a0452f886fb4c536d1e50fc34710e0124d18" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.828255 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f7hjx"] Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.844066 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f7hjx"] Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.855602 5108 scope.go:117] "RemoveContainer" containerID="8d5e621c45138ee2f77863fc1ae5ea7498a1fb5dbf8de0b6192fa084f8ec9362" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.905612 5108 scope.go:117] "RemoveContainer" 
containerID="e381a3da07c74ad99e97e3a0077e15f108a1a985d40b16dc5385a4ae7aefb703" Jan 04 00:33:49 crc kubenswrapper[5108]: E0104 00:33:49.906304 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e381a3da07c74ad99e97e3a0077e15f108a1a985d40b16dc5385a4ae7aefb703\": container with ID starting with e381a3da07c74ad99e97e3a0077e15f108a1a985d40b16dc5385a4ae7aefb703 not found: ID does not exist" containerID="e381a3da07c74ad99e97e3a0077e15f108a1a985d40b16dc5385a4ae7aefb703" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.906355 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e381a3da07c74ad99e97e3a0077e15f108a1a985d40b16dc5385a4ae7aefb703"} err="failed to get container status \"e381a3da07c74ad99e97e3a0077e15f108a1a985d40b16dc5385a4ae7aefb703\": rpc error: code = NotFound desc = could not find container \"e381a3da07c74ad99e97e3a0077e15f108a1a985d40b16dc5385a4ae7aefb703\": container with ID starting with e381a3da07c74ad99e97e3a0077e15f108a1a985d40b16dc5385a4ae7aefb703 not found: ID does not exist" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.906388 5108 scope.go:117] "RemoveContainer" containerID="35a6c77c5cf7991af9f51160caa7a0452f886fb4c536d1e50fc34710e0124d18" Jan 04 00:33:49 crc kubenswrapper[5108]: E0104 00:33:49.906970 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35a6c77c5cf7991af9f51160caa7a0452f886fb4c536d1e50fc34710e0124d18\": container with ID starting with 35a6c77c5cf7991af9f51160caa7a0452f886fb4c536d1e50fc34710e0124d18 not found: ID does not exist" containerID="35a6c77c5cf7991af9f51160caa7a0452f886fb4c536d1e50fc34710e0124d18" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.907004 5108 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"35a6c77c5cf7991af9f51160caa7a0452f886fb4c536d1e50fc34710e0124d18"} err="failed to get container status \"35a6c77c5cf7991af9f51160caa7a0452f886fb4c536d1e50fc34710e0124d18\": rpc error: code = NotFound desc = could not find container \"35a6c77c5cf7991af9f51160caa7a0452f886fb4c536d1e50fc34710e0124d18\": container with ID starting with 35a6c77c5cf7991af9f51160caa7a0452f886fb4c536d1e50fc34710e0124d18 not found: ID does not exist" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.907021 5108 scope.go:117] "RemoveContainer" containerID="8d5e621c45138ee2f77863fc1ae5ea7498a1fb5dbf8de0b6192fa084f8ec9362" Jan 04 00:33:49 crc kubenswrapper[5108]: E0104 00:33:49.907571 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d5e621c45138ee2f77863fc1ae5ea7498a1fb5dbf8de0b6192fa084f8ec9362\": container with ID starting with 8d5e621c45138ee2f77863fc1ae5ea7498a1fb5dbf8de0b6192fa084f8ec9362 not found: ID does not exist" containerID="8d5e621c45138ee2f77863fc1ae5ea7498a1fb5dbf8de0b6192fa084f8ec9362" Jan 04 00:33:49 crc kubenswrapper[5108]: I0104 00:33:49.907603 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d5e621c45138ee2f77863fc1ae5ea7498a1fb5dbf8de0b6192fa084f8ec9362"} err="failed to get container status \"8d5e621c45138ee2f77863fc1ae5ea7498a1fb5dbf8de0b6192fa084f8ec9362\": rpc error: code = NotFound desc = could not find container \"8d5e621c45138ee2f77863fc1ae5ea7498a1fb5dbf8de0b6192fa084f8ec9362\": container with ID starting with 8d5e621c45138ee2f77863fc1ae5ea7498a1fb5dbf8de0b6192fa084f8ec9362 not found: ID does not exist" Jan 04 00:33:50 crc kubenswrapper[5108]: I0104 00:33:50.463480 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="951d964c-d0d5-4241-a4ee-f5ec8c8e24f3" path="/var/lib/kubelet/pods/951d964c-d0d5-4241-a4ee-f5ec8c8e24f3/volumes" Jan 04 00:33:50 crc kubenswrapper[5108]: I0104 
00:33:50.788264 5108 generic.go:358] "Generic (PLEG): container finished" podID="beab3683-44e4-49e8-998d-003a814539a2" containerID="b0e807ea0dd0bf20b866a2929e7a7916626eb9d11491436e26083cebf7bf2a9a" exitCode=0 Jan 04 00:33:50 crc kubenswrapper[5108]: I0104 00:33:50.788382 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"beab3683-44e4-49e8-998d-003a814539a2","Type":"ContainerDied","Data":"b0e807ea0dd0bf20b866a2929e7a7916626eb9d11491436e26083cebf7bf2a9a"} Jan 04 00:33:50 crc kubenswrapper[5108]: I0104 00:33:50.833116 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_beab3683-44e4-49e8-998d-003a814539a2/manage-dockerfile/0.log" Jan 04 00:33:51 crc kubenswrapper[5108]: I0104 00:33:51.808240 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"beab3683-44e4-49e8-998d-003a814539a2","Type":"ContainerStarted","Data":"378061244cb166b575fef170786b4cba77a6ebf33a9f2c2049e6bd21cd7f2b62"} Jan 04 00:33:51 crc kubenswrapper[5108]: I0104 00:33:51.846261 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.846183808 podStartE2EDuration="5.846183808s" podCreationTimestamp="2026-01-04 00:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:33:51.843197536 +0000 UTC m=+1405.831796623" watchObservedRunningTime="2026-01-04 00:33:51.846183808 +0000 UTC m=+1405.834748924" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.140964 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29458114-bbwkb"] Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.148121 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="951d964c-d0d5-4241-a4ee-f5ec8c8e24f3" 
containerName="registry-server" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.148283 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="951d964c-d0d5-4241-a4ee-f5ec8c8e24f3" containerName="registry-server" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.148396 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dcabedff-fce3-485f-9a18-b86342c79e04" containerName="docker-build" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.148474 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcabedff-fce3-485f-9a18-b86342c79e04" containerName="docker-build" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.148544 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dcabedff-fce3-485f-9a18-b86342c79e04" containerName="manage-dockerfile" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.148600 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcabedff-fce3-485f-9a18-b86342c79e04" containerName="manage-dockerfile" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.148671 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="951d964c-d0d5-4241-a4ee-f5ec8c8e24f3" containerName="extract-utilities" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.148723 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="951d964c-d0d5-4241-a4ee-f5ec8c8e24f3" containerName="extract-utilities" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.148787 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="951d964c-d0d5-4241-a4ee-f5ec8c8e24f3" containerName="extract-content" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.148839 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="951d964c-d0d5-4241-a4ee-f5ec8c8e24f3" containerName="extract-content" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.149030 5108 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="951d964c-d0d5-4241-a4ee-f5ec8c8e24f3" containerName="registry-server" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.149098 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="dcabedff-fce3-485f-9a18-b86342c79e04" containerName="docker-build" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.189295 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458114-bbwkb"] Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.189533 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458114-bbwkb" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.194540 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.194808 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-s7k94\"" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.194944 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.258327 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtqrq\" (UniqueName: \"kubernetes.io/projected/d2866724-88f6-46f3-87c3-d8b7af442d87-kube-api-access-jtqrq\") pod \"auto-csr-approver-29458114-bbwkb\" (UID: \"d2866724-88f6-46f3-87c3-d8b7af442d87\") " pod="openshift-infra/auto-csr-approver-29458114-bbwkb" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.360110 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jtqrq\" (UniqueName: \"kubernetes.io/projected/d2866724-88f6-46f3-87c3-d8b7af442d87-kube-api-access-jtqrq\") pod \"auto-csr-approver-29458114-bbwkb\" (UID: 
\"d2866724-88f6-46f3-87c3-d8b7af442d87\") " pod="openshift-infra/auto-csr-approver-29458114-bbwkb" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.393429 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtqrq\" (UniqueName: \"kubernetes.io/projected/d2866724-88f6-46f3-87c3-d8b7af442d87-kube-api-access-jtqrq\") pod \"auto-csr-approver-29458114-bbwkb\" (UID: \"d2866724-88f6-46f3-87c3-d8b7af442d87\") " pod="openshift-infra/auto-csr-approver-29458114-bbwkb" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.509835 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458114-bbwkb" Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.748036 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458114-bbwkb"] Jan 04 00:34:00 crc kubenswrapper[5108]: I0104 00:34:00.888580 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458114-bbwkb" event={"ID":"d2866724-88f6-46f3-87c3-d8b7af442d87","Type":"ContainerStarted","Data":"3ec250f23a525b86318b244e93b1d2029a82620709fef00c83ca5602ad3d29bc"} Jan 04 00:34:02 crc kubenswrapper[5108]: I0104 00:34:02.917241 5108 generic.go:358] "Generic (PLEG): container finished" podID="d2866724-88f6-46f3-87c3-d8b7af442d87" containerID="0fc67ad9731bd2df6910905b5525ba29136a7cd6f9212baeaa39dd25a5a328b9" exitCode=0 Jan 04 00:34:02 crc kubenswrapper[5108]: I0104 00:34:02.917371 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458114-bbwkb" event={"ID":"d2866724-88f6-46f3-87c3-d8b7af442d87","Type":"ContainerDied","Data":"0fc67ad9731bd2df6910905b5525ba29136a7cd6f9212baeaa39dd25a5a328b9"} Jan 04 00:34:04 crc kubenswrapper[5108]: I0104 00:34:04.206356 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29458114-bbwkb" Jan 04 00:34:04 crc kubenswrapper[5108]: I0104 00:34:04.339628 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtqrq\" (UniqueName: \"kubernetes.io/projected/d2866724-88f6-46f3-87c3-d8b7af442d87-kube-api-access-jtqrq\") pod \"d2866724-88f6-46f3-87c3-d8b7af442d87\" (UID: \"d2866724-88f6-46f3-87c3-d8b7af442d87\") " Jan 04 00:34:04 crc kubenswrapper[5108]: I0104 00:34:04.376127 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2866724-88f6-46f3-87c3-d8b7af442d87-kube-api-access-jtqrq" (OuterVolumeSpecName: "kube-api-access-jtqrq") pod "d2866724-88f6-46f3-87c3-d8b7af442d87" (UID: "d2866724-88f6-46f3-87c3-d8b7af442d87"). InnerVolumeSpecName "kube-api-access-jtqrq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:34:04 crc kubenswrapper[5108]: I0104 00:34:04.442014 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jtqrq\" (UniqueName: \"kubernetes.io/projected/d2866724-88f6-46f3-87c3-d8b7af442d87-kube-api-access-jtqrq\") on node \"crc\" DevicePath \"\"" Jan 04 00:34:04 crc kubenswrapper[5108]: I0104 00:34:04.936643 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29458114-bbwkb" Jan 04 00:34:04 crc kubenswrapper[5108]: I0104 00:34:04.936704 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458114-bbwkb" event={"ID":"d2866724-88f6-46f3-87c3-d8b7af442d87","Type":"ContainerDied","Data":"3ec250f23a525b86318b244e93b1d2029a82620709fef00c83ca5602ad3d29bc"} Jan 04 00:34:04 crc kubenswrapper[5108]: I0104 00:34:04.937057 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ec250f23a525b86318b244e93b1d2029a82620709fef00c83ca5602ad3d29bc" Jan 04 00:34:05 crc kubenswrapper[5108]: I0104 00:34:05.282655 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29458108-r6gfw"] Jan 04 00:34:05 crc kubenswrapper[5108]: I0104 00:34:05.292066 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29458108-r6gfw"] Jan 04 00:34:06 crc kubenswrapper[5108]: I0104 00:34:06.458891 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1" path="/var/lib/kubelet/pods/068d5f06-a4c5-46a7-ac2a-7ea19fce3ed1/volumes" Jan 04 00:34:30 crc kubenswrapper[5108]: I0104 00:34:30.840349 5108 scope.go:117] "RemoveContainer" containerID="ffbf071e4332cfa038f9bb3c89ba0e184332d2b0cb660826aaa9e1ffb2807727" Jan 04 00:34:51 crc kubenswrapper[5108]: I0104 00:34:51.038281 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l8f52"] Jan 04 00:34:51 crc kubenswrapper[5108]: I0104 00:34:51.040306 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d2866724-88f6-46f3-87c3-d8b7af442d87" containerName="oc" Jan 04 00:34:51 crc kubenswrapper[5108]: I0104 00:34:51.040334 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2866724-88f6-46f3-87c3-d8b7af442d87" containerName="oc" Jan 04 00:34:51 crc kubenswrapper[5108]: I0104 
00:34:51.040539 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="d2866724-88f6-46f3-87c3-d8b7af442d87" containerName="oc" Jan 04 00:34:51 crc kubenswrapper[5108]: I0104 00:34:51.553922 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l8f52"] Jan 04 00:34:51 crc kubenswrapper[5108]: I0104 00:34:51.554150 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:34:51 crc kubenswrapper[5108]: I0104 00:34:51.740099 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7c8e14-16bd-46d2-9014-752b77bb29d1-utilities\") pod \"redhat-operators-l8f52\" (UID: \"6c7c8e14-16bd-46d2-9014-752b77bb29d1\") " pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:34:51 crc kubenswrapper[5108]: I0104 00:34:51.740437 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7c8e14-16bd-46d2-9014-752b77bb29d1-catalog-content\") pod \"redhat-operators-l8f52\" (UID: \"6c7c8e14-16bd-46d2-9014-752b77bb29d1\") " pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:34:51 crc kubenswrapper[5108]: I0104 00:34:51.740693 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7898\" (UniqueName: \"kubernetes.io/projected/6c7c8e14-16bd-46d2-9014-752b77bb29d1-kube-api-access-z7898\") pod \"redhat-operators-l8f52\" (UID: \"6c7c8e14-16bd-46d2-9014-752b77bb29d1\") " pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:34:51 crc kubenswrapper[5108]: I0104 00:34:51.842056 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7c8e14-16bd-46d2-9014-752b77bb29d1-utilities\") pod 
\"redhat-operators-l8f52\" (UID: \"6c7c8e14-16bd-46d2-9014-752b77bb29d1\") " pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:34:51 crc kubenswrapper[5108]: I0104 00:34:51.842166 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7c8e14-16bd-46d2-9014-752b77bb29d1-catalog-content\") pod \"redhat-operators-l8f52\" (UID: \"6c7c8e14-16bd-46d2-9014-752b77bb29d1\") " pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:34:51 crc kubenswrapper[5108]: I0104 00:34:51.842232 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z7898\" (UniqueName: \"kubernetes.io/projected/6c7c8e14-16bd-46d2-9014-752b77bb29d1-kube-api-access-z7898\") pod \"redhat-operators-l8f52\" (UID: \"6c7c8e14-16bd-46d2-9014-752b77bb29d1\") " pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:34:51 crc kubenswrapper[5108]: I0104 00:34:51.842905 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7c8e14-16bd-46d2-9014-752b77bb29d1-utilities\") pod \"redhat-operators-l8f52\" (UID: \"6c7c8e14-16bd-46d2-9014-752b77bb29d1\") " pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:34:51 crc kubenswrapper[5108]: I0104 00:34:51.842935 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7c8e14-16bd-46d2-9014-752b77bb29d1-catalog-content\") pod \"redhat-operators-l8f52\" (UID: \"6c7c8e14-16bd-46d2-9014-752b77bb29d1\") " pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:34:51 crc kubenswrapper[5108]: I0104 00:34:51.868040 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7898\" (UniqueName: \"kubernetes.io/projected/6c7c8e14-16bd-46d2-9014-752b77bb29d1-kube-api-access-z7898\") pod \"redhat-operators-l8f52\" (UID: 
\"6c7c8e14-16bd-46d2-9014-752b77bb29d1\") " pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:34:51 crc kubenswrapper[5108]: I0104 00:34:51.876603 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:34:52 crc kubenswrapper[5108]: I0104 00:34:52.138954 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l8f52"] Jan 04 00:34:52 crc kubenswrapper[5108]: I0104 00:34:52.340973 5108 generic.go:358] "Generic (PLEG): container finished" podID="beab3683-44e4-49e8-998d-003a814539a2" containerID="378061244cb166b575fef170786b4cba77a6ebf33a9f2c2049e6bd21cd7f2b62" exitCode=0 Jan 04 00:34:52 crc kubenswrapper[5108]: I0104 00:34:52.341063 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"beab3683-44e4-49e8-998d-003a814539a2","Type":"ContainerDied","Data":"378061244cb166b575fef170786b4cba77a6ebf33a9f2c2049e6bd21cd7f2b62"} Jan 04 00:34:52 crc kubenswrapper[5108]: I0104 00:34:52.343227 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8f52" event={"ID":"6c7c8e14-16bd-46d2-9014-752b77bb29d1","Type":"ContainerStarted","Data":"71c92ff39b0d7c8a225e3070aa72f78fc1e8794f42ac0def0bd7affe31ee296d"} Jan 04 00:34:52 crc kubenswrapper[5108]: I0104 00:34:52.343263 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8f52" event={"ID":"6c7c8e14-16bd-46d2-9014-752b77bb29d1","Type":"ContainerStarted","Data":"03045e649eb732638b7f8dc38fe5d6245aafce36001c5541e82741d7c99ab813"} Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.353157 5108 generic.go:358] "Generic (PLEG): container finished" podID="6c7c8e14-16bd-46d2-9014-752b77bb29d1" containerID="71c92ff39b0d7c8a225e3070aa72f78fc1e8794f42ac0def0bd7affe31ee296d" exitCode=0 Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.353632 5108 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8f52" event={"ID":"6c7c8e14-16bd-46d2-9014-752b77bb29d1","Type":"ContainerDied","Data":"71c92ff39b0d7c8a225e3070aa72f78fc1e8794f42ac0def0bd7affe31ee296d"} Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.618797 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.782513 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-system-configs\") pod \"beab3683-44e4-49e8-998d-003a814539a2\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.782624 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-container-storage-root\") pod \"beab3683-44e4-49e8-998d-003a814539a2\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.782727 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-buildworkdir\") pod \"beab3683-44e4-49e8-998d-003a814539a2\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.782802 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-container-storage-run\") pod \"beab3683-44e4-49e8-998d-003a814539a2\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.782855 5108 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-build-blob-cache\") pod \"beab3683-44e4-49e8-998d-003a814539a2\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.782883 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-proxy-ca-bundles\") pod \"beab3683-44e4-49e8-998d-003a814539a2\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.782929 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xmgp\" (UniqueName: \"kubernetes.io/projected/beab3683-44e4-49e8-998d-003a814539a2-kube-api-access-9xmgp\") pod \"beab3683-44e4-49e8-998d-003a814539a2\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.783047 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-ca-bundles\") pod \"beab3683-44e4-49e8-998d-003a814539a2\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.783476 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/beab3683-44e4-49e8-998d-003a814539a2-node-pullsecrets\") pod \"beab3683-44e4-49e8-998d-003a814539a2\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.783546 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: 
\"kubernetes.io/secret/beab3683-44e4-49e8-998d-003a814539a2-builder-dockercfg-xhpgk-pull\") pod \"beab3683-44e4-49e8-998d-003a814539a2\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.783589 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/beab3683-44e4-49e8-998d-003a814539a2-builder-dockercfg-xhpgk-push\") pod \"beab3683-44e4-49e8-998d-003a814539a2\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.783624 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/beab3683-44e4-49e8-998d-003a814539a2-buildcachedir\") pod \"beab3683-44e4-49e8-998d-003a814539a2\" (UID: \"beab3683-44e4-49e8-998d-003a814539a2\") " Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.783950 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "beab3683-44e4-49e8-998d-003a814539a2" (UID: "beab3683-44e4-49e8-998d-003a814539a2"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.783983 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beab3683-44e4-49e8-998d-003a814539a2-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "beab3683-44e4-49e8-998d-003a814539a2" (UID: "beab3683-44e4-49e8-998d-003a814539a2"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.783964 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beab3683-44e4-49e8-998d-003a814539a2-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "beab3683-44e4-49e8-998d-003a814539a2" (UID: "beab3683-44e4-49e8-998d-003a814539a2"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.784075 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "beab3683-44e4-49e8-998d-003a814539a2" (UID: "beab3683-44e4-49e8-998d-003a814539a2"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.784236 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "beab3683-44e4-49e8-998d-003a814539a2" (UID: "beab3683-44e4-49e8-998d-003a814539a2"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.784592 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "beab3683-44e4-49e8-998d-003a814539a2" (UID: "beab3683-44e4-49e8-998d-003a814539a2"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.786258 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "beab3683-44e4-49e8-998d-003a814539a2" (UID: "beab3683-44e4-49e8-998d-003a814539a2"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.793093 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beab3683-44e4-49e8-998d-003a814539a2-kube-api-access-9xmgp" (OuterVolumeSpecName: "kube-api-access-9xmgp") pod "beab3683-44e4-49e8-998d-003a814539a2" (UID: "beab3683-44e4-49e8-998d-003a814539a2"). InnerVolumeSpecName "kube-api-access-9xmgp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.793118 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beab3683-44e4-49e8-998d-003a814539a2-builder-dockercfg-xhpgk-pull" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-pull") pod "beab3683-44e4-49e8-998d-003a814539a2" (UID: "beab3683-44e4-49e8-998d-003a814539a2"). InnerVolumeSpecName "builder-dockercfg-xhpgk-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.793465 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beab3683-44e4-49e8-998d-003a814539a2-builder-dockercfg-xhpgk-push" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-push") pod "beab3683-44e4-49e8-998d-003a814539a2" (UID: "beab3683-44e4-49e8-998d-003a814539a2"). InnerVolumeSpecName "builder-dockercfg-xhpgk-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.886160 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.886212 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.886222 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9xmgp\" (UniqueName: \"kubernetes.io/projected/beab3683-44e4-49e8-998d-003a814539a2-kube-api-access-9xmgp\") on node \"crc\" DevicePath \"\"" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.886233 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.886242 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/beab3683-44e4-49e8-998d-003a814539a2-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.886252 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/beab3683-44e4-49e8-998d-003a814539a2-builder-dockercfg-xhpgk-pull\") on node \"crc\" DevicePath \"\"" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.886261 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/beab3683-44e4-49e8-998d-003a814539a2-builder-dockercfg-xhpgk-push\") 
on node \"crc\" DevicePath \"\"" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.886271 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/beab3683-44e4-49e8-998d-003a814539a2-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.886280 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/beab3683-44e4-49e8-998d-003a814539a2-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.886290 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.939429 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "beab3683-44e4-49e8-998d-003a814539a2" (UID: "beab3683-44e4-49e8-998d-003a814539a2"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:34:53 crc kubenswrapper[5108]: I0104 00:34:53.988598 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 04 00:34:54 crc kubenswrapper[5108]: I0104 00:34:54.363968 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"beab3683-44e4-49e8-998d-003a814539a2","Type":"ContainerDied","Data":"487102beb0bef73f914f5e162454d163ad448bc6700dd5b4c2213e8427d5698a"} Jan 04 00:34:54 crc kubenswrapper[5108]: I0104 00:34:54.364018 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="487102beb0bef73f914f5e162454d163ad448bc6700dd5b4c2213e8427d5698a" Jan 04 00:34:54 crc kubenswrapper[5108]: I0104 00:34:54.364159 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 04 00:34:54 crc kubenswrapper[5108]: I0104 00:34:54.555295 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "beab3683-44e4-49e8-998d-003a814539a2" (UID: "beab3683-44e4-49e8-998d-003a814539a2"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:34:54 crc kubenswrapper[5108]: I0104 00:34:54.600056 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/beab3683-44e4-49e8-998d-003a814539a2-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 04 00:34:55 crc kubenswrapper[5108]: I0104 00:34:55.376840 5108 generic.go:358] "Generic (PLEG): container finished" podID="6c7c8e14-16bd-46d2-9014-752b77bb29d1" containerID="d55bdeea31ce1a7b703f3ab76b54154ca2f4b5cf9afc42ef111f4be0f0707081" exitCode=0 Jan 04 00:34:55 crc kubenswrapper[5108]: I0104 00:34:55.376969 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8f52" event={"ID":"6c7c8e14-16bd-46d2-9014-752b77bb29d1","Type":"ContainerDied","Data":"d55bdeea31ce1a7b703f3ab76b54154ca2f4b5cf9afc42ef111f4be0f0707081"} Jan 04 00:34:56 crc kubenswrapper[5108]: I0104 00:34:56.391796 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8f52" event={"ID":"6c7c8e14-16bd-46d2-9014-752b77bb29d1","Type":"ContainerStarted","Data":"3a749cbef8cdd611b188e8b6d3657a1edf2575a22dc4b04a4f40883584d27a93"} Jan 04 00:34:56 crc kubenswrapper[5108]: I0104 00:34:56.427447 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l8f52" podStartSLOduration=4.526051246 podStartE2EDuration="5.427355299s" podCreationTimestamp="2026-01-04 00:34:51 +0000 UTC" firstStartedPulling="2026-01-04 00:34:53.354704299 +0000 UTC m=+1467.343269395" lastFinishedPulling="2026-01-04 00:34:54.256008362 +0000 UTC m=+1468.244573448" observedRunningTime="2026-01-04 00:34:56.421078206 +0000 UTC m=+1470.409643312" watchObservedRunningTime="2026-01-04 00:34:56.427355299 +0000 UTC m=+1470.415920385" Jan 04 00:34:58 crc kubenswrapper[5108]: I0104 00:34:58.775954 5108 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 04 00:34:58 crc kubenswrapper[5108]: I0104 00:34:58.776826 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="beab3683-44e4-49e8-998d-003a814539a2" containerName="manage-dockerfile" Jan 04 00:34:58 crc kubenswrapper[5108]: I0104 00:34:58.776846 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="beab3683-44e4-49e8-998d-003a814539a2" containerName="manage-dockerfile" Jan 04 00:34:58 crc kubenswrapper[5108]: I0104 00:34:58.776862 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="beab3683-44e4-49e8-998d-003a814539a2" containerName="docker-build" Jan 04 00:34:58 crc kubenswrapper[5108]: I0104 00:34:58.776870 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="beab3683-44e4-49e8-998d-003a814539a2" containerName="docker-build" Jan 04 00:34:58 crc kubenswrapper[5108]: I0104 00:34:58.776918 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="beab3683-44e4-49e8-998d-003a814539a2" containerName="git-clone" Jan 04 00:34:58 crc kubenswrapper[5108]: I0104 00:34:58.776926 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="beab3683-44e4-49e8-998d-003a814539a2" containerName="git-clone" Jan 04 00:34:58 crc kubenswrapper[5108]: I0104 00:34:58.777044 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="beab3683-44e4-49e8-998d-003a814539a2" containerName="docker-build" Jan 04 00:34:58 crc kubenswrapper[5108]: I0104 00:34:58.978416 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 04 00:34:58 crc kubenswrapper[5108]: I0104 00:34:58.978700 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:58 crc kubenswrapper[5108]: I0104 00:34:58.982481 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-ca\"" Jan 04 00:34:58 crc kubenswrapper[5108]: I0104 00:34:58.984276 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-sys-config\"" Jan 04 00:34:58 crc kubenswrapper[5108]: I0104 00:34:58.984313 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-xhpgk\"" Jan 04 00:34:58 crc kubenswrapper[5108]: I0104 00:34:58.985654 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-global-ca\"" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.027847 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.027910 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.028020 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/e4c2e984-3eff-40ee-8908-c649820966dd-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.028082 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/e4c2e984-3eff-40ee-8908-c649820966dd-builder-dockercfg-xhpgk-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.028128 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.028155 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.028184 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc 
kubenswrapper[5108]: I0104 00:34:59.028236 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.028261 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/e4c2e984-3eff-40ee-8908-c649820966dd-builder-dockercfg-xhpgk-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.028298 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.028337 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z5n7\" (UniqueName: \"kubernetes.io/projected/e4c2e984-3eff-40ee-8908-c649820966dd-kube-api-access-8z5n7\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.028415 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e4c2e984-3eff-40ee-8908-c649820966dd-buildcachedir\") 
pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.130562 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e4c2e984-3eff-40ee-8908-c649820966dd-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.130755 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.130762 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e4c2e984-3eff-40ee-8908-c649820966dd-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.130787 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.130939 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/e4c2e984-3eff-40ee-8908-c649820966dd-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.130987 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/e4c2e984-3eff-40ee-8908-c649820966dd-builder-dockercfg-xhpgk-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.131013 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.131036 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.131062 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.131090 5108 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.131109 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/e4c2e984-3eff-40ee-8908-c649820966dd-builder-dockercfg-xhpgk-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.131140 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.131267 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e4c2e984-3eff-40ee-8908-c649820966dd-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.131406 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 
00:34:59.131437 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8z5n7\" (UniqueName: \"kubernetes.io/projected/e4c2e984-3eff-40ee-8908-c649820966dd-kube-api-access-8z5n7\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.131561 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.131848 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.132144 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.132262 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" 
Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.132261 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.132863 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.140673 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/e4c2e984-3eff-40ee-8908-c649820966dd-builder-dockercfg-xhpgk-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.144036 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/e4c2e984-3eff-40ee-8908-c649820966dd-builder-dockercfg-xhpgk-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.151523 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z5n7\" (UniqueName: \"kubernetes.io/projected/e4c2e984-3eff-40ee-8908-c649820966dd-kube-api-access-8z5n7\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.296491 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:34:59 crc kubenswrapper[5108]: I0104 00:34:59.578596 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 04 00:34:59 crc kubenswrapper[5108]: W0104 00:34:59.591086 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4c2e984_3eff_40ee_8908_c649820966dd.slice/crio-0ba1692cd3503a85c07b284f0c62591bb3b8df1c56cc2fdd29f42e2c3ab56408 WatchSource:0}: Error finding container 0ba1692cd3503a85c07b284f0c62591bb3b8df1c56cc2fdd29f42e2c3ab56408: Status 404 returned error can't find the container with id 0ba1692cd3503a85c07b284f0c62591bb3b8df1c56cc2fdd29f42e2c3ab56408 Jan 04 00:35:00 crc kubenswrapper[5108]: I0104 00:35:00.430736 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"e4c2e984-3eff-40ee-8908-c649820966dd","Type":"ContainerStarted","Data":"0ba1692cd3503a85c07b284f0c62591bb3b8df1c56cc2fdd29f42e2c3ab56408"} Jan 04 00:35:01 crc kubenswrapper[5108]: I0104 00:35:01.439691 5108 generic.go:358] "Generic (PLEG): container finished" podID="e4c2e984-3eff-40ee-8908-c649820966dd" containerID="d26b2511550ce9d7ba0c36946af4e1291bdaa4129b990a4d1947ed3204c5b93e" exitCode=0 Jan 04 00:35:01 crc kubenswrapper[5108]: I0104 00:35:01.439773 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"e4c2e984-3eff-40ee-8908-c649820966dd","Type":"ContainerDied","Data":"d26b2511550ce9d7ba0c36946af4e1291bdaa4129b990a4d1947ed3204c5b93e"} Jan 04 00:35:01 crc kubenswrapper[5108]: I0104 00:35:01.876966 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:35:01 crc kubenswrapper[5108]: I0104 00:35:01.877538 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:35:01 crc kubenswrapper[5108]: I0104 00:35:01.924023 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:35:02 crc kubenswrapper[5108]: I0104 00:35:02.458540 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"e4c2e984-3eff-40ee-8908-c649820966dd","Type":"ContainerStarted","Data":"0e7dacf48afcec7da4b2c886eb837ff652b3cca438e221edad8451750df9b58a"} Jan 04 00:35:02 crc kubenswrapper[5108]: I0104 00:35:02.491123 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:35:02 crc kubenswrapper[5108]: I0104 00:35:02.543715 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l8f52"] Jan 04 00:35:03 crc kubenswrapper[5108]: I0104 00:35:03.487318 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=5.487287616 podStartE2EDuration="5.487287616s" podCreationTimestamp="2026-01-04 00:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:35:03.482415753 +0000 UTC m=+1477.470980869" watchObservedRunningTime="2026-01-04 00:35:03.487287616 +0000 UTC m=+1477.475852702" Jan 04 00:35:04 crc kubenswrapper[5108]: I0104 00:35:04.465468 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l8f52" podUID="6c7c8e14-16bd-46d2-9014-752b77bb29d1" containerName="registry-server" 
containerID="cri-o://3a749cbef8cdd611b188e8b6d3657a1edf2575a22dc4b04a4f40883584d27a93" gracePeriod=2 Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.334135 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.472009 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7c8e14-16bd-46d2-9014-752b77bb29d1-utilities\") pod \"6c7c8e14-16bd-46d2-9014-752b77bb29d1\" (UID: \"6c7c8e14-16bd-46d2-9014-752b77bb29d1\") " Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.472359 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7c8e14-16bd-46d2-9014-752b77bb29d1-catalog-content\") pod \"6c7c8e14-16bd-46d2-9014-752b77bb29d1\" (UID: \"6c7c8e14-16bd-46d2-9014-752b77bb29d1\") " Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.472403 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7898\" (UniqueName: \"kubernetes.io/projected/6c7c8e14-16bd-46d2-9014-752b77bb29d1-kube-api-access-z7898\") pod \"6c7c8e14-16bd-46d2-9014-752b77bb29d1\" (UID: \"6c7c8e14-16bd-46d2-9014-752b77bb29d1\") " Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.474148 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c7c8e14-16bd-46d2-9014-752b77bb29d1-utilities" (OuterVolumeSpecName: "utilities") pod "6c7c8e14-16bd-46d2-9014-752b77bb29d1" (UID: "6c7c8e14-16bd-46d2-9014-752b77bb29d1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.476304 5108 generic.go:358] "Generic (PLEG): container finished" podID="6c7c8e14-16bd-46d2-9014-752b77bb29d1" containerID="3a749cbef8cdd611b188e8b6d3657a1edf2575a22dc4b04a4f40883584d27a93" exitCode=0 Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.476686 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8f52" event={"ID":"6c7c8e14-16bd-46d2-9014-752b77bb29d1","Type":"ContainerDied","Data":"3a749cbef8cdd611b188e8b6d3657a1edf2575a22dc4b04a4f40883584d27a93"} Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.476738 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8f52" event={"ID":"6c7c8e14-16bd-46d2-9014-752b77bb29d1","Type":"ContainerDied","Data":"03045e649eb732638b7f8dc38fe5d6245aafce36001c5541e82741d7c99ab813"} Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.476766 5108 scope.go:117] "RemoveContainer" containerID="3a749cbef8cdd611b188e8b6d3657a1edf2575a22dc4b04a4f40883584d27a93" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.477024 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8f52" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.483948 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c7c8e14-16bd-46d2-9014-752b77bb29d1-kube-api-access-z7898" (OuterVolumeSpecName: "kube-api-access-z7898") pod "6c7c8e14-16bd-46d2-9014-752b77bb29d1" (UID: "6c7c8e14-16bd-46d2-9014-752b77bb29d1"). InnerVolumeSpecName "kube-api-access-z7898". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.520148 5108 scope.go:117] "RemoveContainer" containerID="d55bdeea31ce1a7b703f3ab76b54154ca2f4b5cf9afc42ef111f4be0f0707081" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.543660 5108 scope.go:117] "RemoveContainer" containerID="71c92ff39b0d7c8a225e3070aa72f78fc1e8794f42ac0def0bd7affe31ee296d" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.565538 5108 scope.go:117] "RemoveContainer" containerID="3a749cbef8cdd611b188e8b6d3657a1edf2575a22dc4b04a4f40883584d27a93" Jan 04 00:35:05 crc kubenswrapper[5108]: E0104 00:35:05.566004 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a749cbef8cdd611b188e8b6d3657a1edf2575a22dc4b04a4f40883584d27a93\": container with ID starting with 3a749cbef8cdd611b188e8b6d3657a1edf2575a22dc4b04a4f40883584d27a93 not found: ID does not exist" containerID="3a749cbef8cdd611b188e8b6d3657a1edf2575a22dc4b04a4f40883584d27a93" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.566055 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a749cbef8cdd611b188e8b6d3657a1edf2575a22dc4b04a4f40883584d27a93"} err="failed to get container status \"3a749cbef8cdd611b188e8b6d3657a1edf2575a22dc4b04a4f40883584d27a93\": rpc error: code = NotFound desc = could not find container \"3a749cbef8cdd611b188e8b6d3657a1edf2575a22dc4b04a4f40883584d27a93\": container with ID starting with 3a749cbef8cdd611b188e8b6d3657a1edf2575a22dc4b04a4f40883584d27a93 not found: ID does not exist" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.566089 5108 scope.go:117] "RemoveContainer" containerID="d55bdeea31ce1a7b703f3ab76b54154ca2f4b5cf9afc42ef111f4be0f0707081" Jan 04 00:35:05 crc kubenswrapper[5108]: E0104 00:35:05.566662 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"d55bdeea31ce1a7b703f3ab76b54154ca2f4b5cf9afc42ef111f4be0f0707081\": container with ID starting with d55bdeea31ce1a7b703f3ab76b54154ca2f4b5cf9afc42ef111f4be0f0707081 not found: ID does not exist" containerID="d55bdeea31ce1a7b703f3ab76b54154ca2f4b5cf9afc42ef111f4be0f0707081" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.566686 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d55bdeea31ce1a7b703f3ab76b54154ca2f4b5cf9afc42ef111f4be0f0707081"} err="failed to get container status \"d55bdeea31ce1a7b703f3ab76b54154ca2f4b5cf9afc42ef111f4be0f0707081\": rpc error: code = NotFound desc = could not find container \"d55bdeea31ce1a7b703f3ab76b54154ca2f4b5cf9afc42ef111f4be0f0707081\": container with ID starting with d55bdeea31ce1a7b703f3ab76b54154ca2f4b5cf9afc42ef111f4be0f0707081 not found: ID does not exist" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.566706 5108 scope.go:117] "RemoveContainer" containerID="71c92ff39b0d7c8a225e3070aa72f78fc1e8794f42ac0def0bd7affe31ee296d" Jan 04 00:35:05 crc kubenswrapper[5108]: E0104 00:35:05.567011 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71c92ff39b0d7c8a225e3070aa72f78fc1e8794f42ac0def0bd7affe31ee296d\": container with ID starting with 71c92ff39b0d7c8a225e3070aa72f78fc1e8794f42ac0def0bd7affe31ee296d not found: ID does not exist" containerID="71c92ff39b0d7c8a225e3070aa72f78fc1e8794f42ac0def0bd7affe31ee296d" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.567079 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71c92ff39b0d7c8a225e3070aa72f78fc1e8794f42ac0def0bd7affe31ee296d"} err="failed to get container status \"71c92ff39b0d7c8a225e3070aa72f78fc1e8794f42ac0def0bd7affe31ee296d\": rpc error: code = NotFound desc = could not find container \"71c92ff39b0d7c8a225e3070aa72f78fc1e8794f42ac0def0bd7affe31ee296d\": 
container with ID starting with 71c92ff39b0d7c8a225e3070aa72f78fc1e8794f42ac0def0bd7affe31ee296d not found: ID does not exist" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.575170 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z7898\" (UniqueName: \"kubernetes.io/projected/6c7c8e14-16bd-46d2-9014-752b77bb29d1-kube-api-access-z7898\") on node \"crc\" DevicePath \"\"" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.575221 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7c8e14-16bd-46d2-9014-752b77bb29d1-utilities\") on node \"crc\" DevicePath \"\"" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.608527 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c7c8e14-16bd-46d2-9014-752b77bb29d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c7c8e14-16bd-46d2-9014-752b77bb29d1" (UID: "6c7c8e14-16bd-46d2-9014-752b77bb29d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.676464 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7c8e14-16bd-46d2-9014-752b77bb29d1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.817477 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l8f52"] Jan 04 00:35:05 crc kubenswrapper[5108]: I0104 00:35:05.823888 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l8f52"] Jan 04 00:35:06 crc kubenswrapper[5108]: I0104 00:35:06.459295 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c7c8e14-16bd-46d2-9014-752b77bb29d1" path="/var/lib/kubelet/pods/6c7c8e14-16bd-46d2-9014-752b77bb29d1/volumes" Jan 04 00:35:08 crc kubenswrapper[5108]: I0104 00:35:08.846149 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 04 00:35:08 crc kubenswrapper[5108]: I0104 00:35:08.846883 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="e4c2e984-3eff-40ee-8908-c649820966dd" containerName="docker-build" containerID="cri-o://0e7dacf48afcec7da4b2c886eb837ff652b3cca438e221edad8451750df9b58a" gracePeriod=30 Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.492146 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.493040 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c7c8e14-16bd-46d2-9014-752b77bb29d1" containerName="extract-utilities" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.493057 5108 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6c7c8e14-16bd-46d2-9014-752b77bb29d1" containerName="extract-utilities" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.493068 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c7c8e14-16bd-46d2-9014-752b77bb29d1" containerName="extract-content" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.493074 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7c8e14-16bd-46d2-9014-752b77bb29d1" containerName="extract-content" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.493109 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c7c8e14-16bd-46d2-9014-752b77bb29d1" containerName="registry-server" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.493116 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7c8e14-16bd-46d2-9014-752b77bb29d1" containerName="registry-server" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.493249 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="6c7c8e14-16bd-46d2-9014-752b77bb29d1" containerName="registry-server" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.842854 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.843149 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.848339 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-global-ca\"" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.848716 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-sys-config\"" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.848741 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-ca\"" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.864798 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/15d44445-2e80-4d01-a36e-9b7b0f9f0981-builder-dockercfg-xhpgk-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.864846 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.864874 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15d44445-2e80-4d01-a36e-9b7b0f9f0981-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 
00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.864914 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.865229 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/15d44445-2e80-4d01-a36e-9b7b0f9f0981-builder-dockercfg-xhpgk-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.865365 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.865489 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.865633 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.865732 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/15d44445-2e80-4d01-a36e-9b7b0f9f0981-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.865775 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.865820 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blcx9\" (UniqueName: \"kubernetes.io/projected/15d44445-2e80-4d01-a36e-9b7b0f9f0981-kube-api-access-blcx9\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.865943 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 
00:35:10.967364 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.967435 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/15d44445-2e80-4d01-a36e-9b7b0f9f0981-builder-dockercfg-xhpgk-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.967462 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.967499 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.967524 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.967565 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/15d44445-2e80-4d01-a36e-9b7b0f9f0981-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.967597 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.967676 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/15d44445-2e80-4d01-a36e-9b7b0f9f0981-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.968167 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.968241 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: 
\"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.968339 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-blcx9\" (UniqueName: \"kubernetes.io/projected/15d44445-2e80-4d01-a36e-9b7b0f9f0981-kube-api-access-blcx9\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.968471 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.968587 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.968752 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.968793 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-proxy-ca-bundles\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.968610 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/15d44445-2e80-4d01-a36e-9b7b0f9f0981-builder-dockercfg-xhpgk-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.968856 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.968893 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15d44445-2e80-4d01-a36e-9b7b0f9f0981-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.969106 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15d44445-2e80-4d01-a36e-9b7b0f9f0981-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.969589 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.969661 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.975963 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/15d44445-2e80-4d01-a36e-9b7b0f9f0981-builder-dockercfg-xhpgk-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.975976 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/15d44445-2e80-4d01-a36e-9b7b0f9f0981-builder-dockercfg-xhpgk-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:10 crc kubenswrapper[5108]: I0104 00:35:10.988590 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-blcx9\" (UniqueName: \"kubernetes.io/projected/15d44445-2e80-4d01-a36e-9b7b0f9f0981-kube-api-access-blcx9\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:11 crc kubenswrapper[5108]: I0104 00:35:11.177657 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:35:11 crc kubenswrapper[5108]: I0104 00:35:11.403864 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 04 00:35:11 crc kubenswrapper[5108]: I0104 00:35:11.524846 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"15d44445-2e80-4d01-a36e-9b7b0f9f0981","Type":"ContainerStarted","Data":"8ae3ed4a22c7976c1e22ddb8987a6cbd33746be709c660cac35fd01cd14ef2f5"} Jan 04 00:35:11 crc kubenswrapper[5108]: I0104 00:35:11.527661 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_e4c2e984-3eff-40ee-8908-c649820966dd/docker-build/0.log" Jan 04 00:35:11 crc kubenswrapper[5108]: I0104 00:35:11.528205 5108 generic.go:358] "Generic (PLEG): container finished" podID="e4c2e984-3eff-40ee-8908-c649820966dd" containerID="0e7dacf48afcec7da4b2c886eb837ff652b3cca438e221edad8451750df9b58a" exitCode=1 Jan 04 00:35:11 crc kubenswrapper[5108]: I0104 00:35:11.528346 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"e4c2e984-3eff-40ee-8908-c649820966dd","Type":"ContainerDied","Data":"0e7dacf48afcec7da4b2c886eb837ff652b3cca438e221edad8451750df9b58a"} Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.353962 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_e4c2e984-3eff-40ee-8908-c649820966dd/docker-build/0.log" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.355130 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.396188 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-ca-bundles\") pod \"e4c2e984-3eff-40ee-8908-c649820966dd\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.396321 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-system-configs\") pod \"e4c2e984-3eff-40ee-8908-c649820966dd\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.396375 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e4c2e984-3eff-40ee-8908-c649820966dd-buildcachedir\") pod \"e4c2e984-3eff-40ee-8908-c649820966dd\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.396435 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-buildworkdir\") pod \"e4c2e984-3eff-40ee-8908-c649820966dd\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.396481 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-build-blob-cache\") pod \"e4c2e984-3eff-40ee-8908-c649820966dd\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.396498 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-8z5n7\" (UniqueName: \"kubernetes.io/projected/e4c2e984-3eff-40ee-8908-c649820966dd-kube-api-access-8z5n7\") pod \"e4c2e984-3eff-40ee-8908-c649820966dd\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.396538 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-proxy-ca-bundles\") pod \"e4c2e984-3eff-40ee-8908-c649820966dd\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.396554 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e4c2e984-3eff-40ee-8908-c649820966dd-node-pullsecrets\") pod \"e4c2e984-3eff-40ee-8908-c649820966dd\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.396581 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-container-storage-root\") pod \"e4c2e984-3eff-40ee-8908-c649820966dd\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.396640 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-container-storage-run\") pod \"e4c2e984-3eff-40ee-8908-c649820966dd\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.396682 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/e4c2e984-3eff-40ee-8908-c649820966dd-builder-dockercfg-xhpgk-push\") pod 
\"e4c2e984-3eff-40ee-8908-c649820966dd\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.396706 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/e4c2e984-3eff-40ee-8908-c649820966dd-builder-dockercfg-xhpgk-pull\") pod \"e4c2e984-3eff-40ee-8908-c649820966dd\" (UID: \"e4c2e984-3eff-40ee-8908-c649820966dd\") " Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.397132 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "e4c2e984-3eff-40ee-8908-c649820966dd" (UID: "e4c2e984-3eff-40ee-8908-c649820966dd"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.397445 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4c2e984-3eff-40ee-8908-c649820966dd-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "e4c2e984-3eff-40ee-8908-c649820966dd" (UID: "e4c2e984-3eff-40ee-8908-c649820966dd"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.397611 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "e4c2e984-3eff-40ee-8908-c649820966dd" (UID: "e4c2e984-3eff-40ee-8908-c649820966dd"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.397881 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "e4c2e984-3eff-40ee-8908-c649820966dd" (UID: "e4c2e984-3eff-40ee-8908-c649820966dd"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.398584 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "e4c2e984-3eff-40ee-8908-c649820966dd" (UID: "e4c2e984-3eff-40ee-8908-c649820966dd"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.399239 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "e4c2e984-3eff-40ee-8908-c649820966dd" (UID: "e4c2e984-3eff-40ee-8908-c649820966dd"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.399759 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4c2e984-3eff-40ee-8908-c649820966dd-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e4c2e984-3eff-40ee-8908-c649820966dd" (UID: "e4c2e984-3eff-40ee-8908-c649820966dd"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.400188 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "e4c2e984-3eff-40ee-8908-c649820966dd" (UID: "e4c2e984-3eff-40ee-8908-c649820966dd"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.405513 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c2e984-3eff-40ee-8908-c649820966dd-kube-api-access-8z5n7" (OuterVolumeSpecName: "kube-api-access-8z5n7") pod "e4c2e984-3eff-40ee-8908-c649820966dd" (UID: "e4c2e984-3eff-40ee-8908-c649820966dd"). InnerVolumeSpecName "kube-api-access-8z5n7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.406404 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c2e984-3eff-40ee-8908-c649820966dd-builder-dockercfg-xhpgk-push" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-push") pod "e4c2e984-3eff-40ee-8908-c649820966dd" (UID: "e4c2e984-3eff-40ee-8908-c649820966dd"). InnerVolumeSpecName "builder-dockercfg-xhpgk-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.406431 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c2e984-3eff-40ee-8908-c649820966dd-builder-dockercfg-xhpgk-pull" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-pull") pod "e4c2e984-3eff-40ee-8908-c649820966dd" (UID: "e4c2e984-3eff-40ee-8908-c649820966dd"). InnerVolumeSpecName "builder-dockercfg-xhpgk-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.464306 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "e4c2e984-3eff-40ee-8908-c649820966dd" (UID: "e4c2e984-3eff-40ee-8908-c649820966dd"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.498808 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.499191 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.499320 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e4c2e984-3eff-40ee-8908-c649820966dd-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.499402 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.499473 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.499561 5108 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-8z5n7\" (UniqueName: \"kubernetes.io/projected/e4c2e984-3eff-40ee-8908-c649820966dd-kube-api-access-8z5n7\") on node \"crc\" DevicePath \"\"" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.499644 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e4c2e984-3eff-40ee-8908-c649820966dd-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.499715 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e4c2e984-3eff-40ee-8908-c649820966dd-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.499796 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.499873 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e4c2e984-3eff-40ee-8908-c649820966dd-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.499950 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/e4c2e984-3eff-40ee-8908-c649820966dd-builder-dockercfg-xhpgk-push\") on node \"crc\" DevicePath \"\"" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.500021 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/e4c2e984-3eff-40ee-8908-c649820966dd-builder-dockercfg-xhpgk-pull\") on node \"crc\" DevicePath \"\"" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.539067 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_e4c2e984-3eff-40ee-8908-c649820966dd/docker-build/0.log" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.539756 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"e4c2e984-3eff-40ee-8908-c649820966dd","Type":"ContainerDied","Data":"0ba1692cd3503a85c07b284f0c62591bb3b8df1c56cc2fdd29f42e2c3ab56408"} Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.539785 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.539834 5108 scope.go:117] "RemoveContainer" containerID="0e7dacf48afcec7da4b2c886eb837ff652b3cca438e221edad8451750df9b58a" Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.578372 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.586658 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 04 00:35:12 crc kubenswrapper[5108]: I0104 00:35:12.589263 5108 scope.go:117] "RemoveContainer" containerID="d26b2511550ce9d7ba0c36946af4e1291bdaa4129b990a4d1947ed3204c5b93e" Jan 04 00:35:13 crc kubenswrapper[5108]: I0104 00:35:13.553243 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"15d44445-2e80-4d01-a36e-9b7b0f9f0981","Type":"ContainerStarted","Data":"81c0534f125aa99a04474a91e0868541ded895eeb194ec7f4bf00ba5d3b7a819"} Jan 04 00:35:14 crc kubenswrapper[5108]: I0104 00:35:14.459130 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4c2e984-3eff-40ee-8908-c649820966dd" path="/var/lib/kubelet/pods/e4c2e984-3eff-40ee-8908-c649820966dd/volumes" Jan 04 00:35:14 crc kubenswrapper[5108]: I0104 00:35:14.563884 
5108 generic.go:358] "Generic (PLEG): container finished" podID="15d44445-2e80-4d01-a36e-9b7b0f9f0981" containerID="81c0534f125aa99a04474a91e0868541ded895eeb194ec7f4bf00ba5d3b7a819" exitCode=0 Jan 04 00:35:14 crc kubenswrapper[5108]: I0104 00:35:14.563971 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"15d44445-2e80-4d01-a36e-9b7b0f9f0981","Type":"ContainerDied","Data":"81c0534f125aa99a04474a91e0868541ded895eeb194ec7f4bf00ba5d3b7a819"} Jan 04 00:35:15 crc kubenswrapper[5108]: I0104 00:35:15.601138 5108 generic.go:358] "Generic (PLEG): container finished" podID="15d44445-2e80-4d01-a36e-9b7b0f9f0981" containerID="ba73c8531b027290ff55a378a89c05bda40b0521a544be98f525e7861062c83d" exitCode=0 Jan 04 00:35:15 crc kubenswrapper[5108]: I0104 00:35:15.601229 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"15d44445-2e80-4d01-a36e-9b7b0f9f0981","Type":"ContainerDied","Data":"ba73c8531b027290ff55a378a89c05bda40b0521a544be98f525e7861062c83d"} Jan 04 00:35:15 crc kubenswrapper[5108]: I0104 00:35:15.632315 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_15d44445-2e80-4d01-a36e-9b7b0f9f0981/manage-dockerfile/0.log" Jan 04 00:35:16 crc kubenswrapper[5108]: I0104 00:35:16.612449 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"15d44445-2e80-4d01-a36e-9b7b0f9f0981","Type":"ContainerStarted","Data":"445654ade815f11bd1a50a81937fbee515471a6663de35abc2d754ffff2493d3"} Jan 04 00:35:16 crc kubenswrapper[5108]: I0104 00:35:16.641277 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=6.641245531 podStartE2EDuration="6.641245531s" podCreationTimestamp="2026-01-04 00:35:10 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:35:16.635655497 +0000 UTC m=+1490.624220593" watchObservedRunningTime="2026-01-04 00:35:16.641245531 +0000 UTC m=+1490.629810617" Jan 04 00:35:24 crc kubenswrapper[5108]: I0104 00:35:24.917293 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 04 00:35:24 crc kubenswrapper[5108]: I0104 00:35:24.918352 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 04 00:35:27 crc kubenswrapper[5108]: I0104 00:35:27.034022 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzs5n_8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23/kube-multus/0.log" Jan 04 00:35:27 crc kubenswrapper[5108]: I0104 00:35:27.034066 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzs5n_8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23/kube-multus/0.log" Jan 04 00:35:27 crc kubenswrapper[5108]: I0104 00:35:27.063874 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 04 00:35:27 crc kubenswrapper[5108]: I0104 00:35:27.063907 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 04 00:35:54 crc kubenswrapper[5108]: I0104 
00:35:54.918023 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 04 00:35:54 crc kubenswrapper[5108]: I0104 00:35:54.918898 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.139978 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29458116-75t67"] Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.142116 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4c2e984-3eff-40ee-8908-c649820966dd" containerName="docker-build" Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.142137 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c2e984-3eff-40ee-8908-c649820966dd" containerName="docker-build" Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.142167 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4c2e984-3eff-40ee-8908-c649820966dd" containerName="manage-dockerfile" Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.142175 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c2e984-3eff-40ee-8908-c649820966dd" containerName="manage-dockerfile" Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.142373 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="e4c2e984-3eff-40ee-8908-c649820966dd" containerName="docker-build" Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.147105 5108 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458116-75t67" Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.150928 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-s7k94\"" Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.151029 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.151048 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.152639 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458116-75t67"] Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.193163 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27qfv\" (UniqueName: \"kubernetes.io/projected/a1a303b7-6544-4341-8518-88b23ca64ce5-kube-api-access-27qfv\") pod \"auto-csr-approver-29458116-75t67\" (UID: \"a1a303b7-6544-4341-8518-88b23ca64ce5\") " pod="openshift-infra/auto-csr-approver-29458116-75t67" Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.294014 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-27qfv\" (UniqueName: \"kubernetes.io/projected/a1a303b7-6544-4341-8518-88b23ca64ce5-kube-api-access-27qfv\") pod \"auto-csr-approver-29458116-75t67\" (UID: \"a1a303b7-6544-4341-8518-88b23ca64ce5\") " pod="openshift-infra/auto-csr-approver-29458116-75t67" Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.318187 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-27qfv\" (UniqueName: \"kubernetes.io/projected/a1a303b7-6544-4341-8518-88b23ca64ce5-kube-api-access-27qfv\") pod \"auto-csr-approver-29458116-75t67\" 
(UID: \"a1a303b7-6544-4341-8518-88b23ca64ce5\") " pod="openshift-infra/auto-csr-approver-29458116-75t67" Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.474264 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458116-75t67" Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.695067 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458116-75t67"] Jan 04 00:36:00 crc kubenswrapper[5108]: I0104 00:36:00.974070 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458116-75t67" event={"ID":"a1a303b7-6544-4341-8518-88b23ca64ce5","Type":"ContainerStarted","Data":"59b4d8854b6a8c55ad905a7ffb27dfa9941f83fd4e238fd47c63d7d055f061a6"} Jan 04 00:36:02 crc kubenswrapper[5108]: I0104 00:36:02.991930 5108 generic.go:358] "Generic (PLEG): container finished" podID="a1a303b7-6544-4341-8518-88b23ca64ce5" containerID="3aace4c3515de9f0692b8022799b9f32f64aa01111fa4a5dfa8f79f04de10a6d" exitCode=0 Jan 04 00:36:02 crc kubenswrapper[5108]: I0104 00:36:02.992000 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458116-75t67" event={"ID":"a1a303b7-6544-4341-8518-88b23ca64ce5","Type":"ContainerDied","Data":"3aace4c3515de9f0692b8022799b9f32f64aa01111fa4a5dfa8f79f04de10a6d"} Jan 04 00:36:04 crc kubenswrapper[5108]: I0104 00:36:04.424384 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29458116-75t67" Jan 04 00:36:04 crc kubenswrapper[5108]: I0104 00:36:04.465278 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27qfv\" (UniqueName: \"kubernetes.io/projected/a1a303b7-6544-4341-8518-88b23ca64ce5-kube-api-access-27qfv\") pod \"a1a303b7-6544-4341-8518-88b23ca64ce5\" (UID: \"a1a303b7-6544-4341-8518-88b23ca64ce5\") " Jan 04 00:36:04 crc kubenswrapper[5108]: I0104 00:36:04.482706 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1a303b7-6544-4341-8518-88b23ca64ce5-kube-api-access-27qfv" (OuterVolumeSpecName: "kube-api-access-27qfv") pod "a1a303b7-6544-4341-8518-88b23ca64ce5" (UID: "a1a303b7-6544-4341-8518-88b23ca64ce5"). InnerVolumeSpecName "kube-api-access-27qfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:36:04 crc kubenswrapper[5108]: I0104 00:36:04.568414 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-27qfv\" (UniqueName: \"kubernetes.io/projected/a1a303b7-6544-4341-8518-88b23ca64ce5-kube-api-access-27qfv\") on node \"crc\" DevicePath \"\"" Jan 04 00:36:05 crc kubenswrapper[5108]: I0104 00:36:05.010630 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458116-75t67" event={"ID":"a1a303b7-6544-4341-8518-88b23ca64ce5","Type":"ContainerDied","Data":"59b4d8854b6a8c55ad905a7ffb27dfa9941f83fd4e238fd47c63d7d055f061a6"} Jan 04 00:36:05 crc kubenswrapper[5108]: I0104 00:36:05.010703 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59b4d8854b6a8c55ad905a7ffb27dfa9941f83fd4e238fd47c63d7d055f061a6" Jan 04 00:36:05 crc kubenswrapper[5108]: I0104 00:36:05.010811 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29458116-75t67" Jan 04 00:36:05 crc kubenswrapper[5108]: I0104 00:36:05.491620 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29458110-zcgsn"] Jan 04 00:36:05 crc kubenswrapper[5108]: I0104 00:36:05.496448 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29458110-zcgsn"] Jan 04 00:36:06 crc kubenswrapper[5108]: I0104 00:36:06.462473 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d010b6a7-84b0-4f46-9be2-a1c621bdbc11" path="/var/lib/kubelet/pods/d010b6a7-84b0-4f46-9be2-a1c621bdbc11/volumes" Jan 04 00:36:24 crc kubenswrapper[5108]: I0104 00:36:24.917143 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 04 00:36:24 crc kubenswrapper[5108]: I0104 00:36:24.918178 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 04 00:36:24 crc kubenswrapper[5108]: I0104 00:36:24.918292 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" Jan 04 00:36:24 crc kubenswrapper[5108]: I0104 00:36:24.919221 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71e1a23e6a33296265d8312485d92dabf3435cdf7d47549db16b40e0523240ea"} pod="openshift-machine-config-operator/machine-config-daemon-njl5v" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 04 00:36:24 crc kubenswrapper[5108]: I0104 00:36:24.919305 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" containerID="cri-o://71e1a23e6a33296265d8312485d92dabf3435cdf7d47549db16b40e0523240ea" gracePeriod=600 Jan 04 00:36:25 crc kubenswrapper[5108]: I0104 00:36:25.182904 5108 generic.go:358] "Generic (PLEG): container finished" podID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerID="71e1a23e6a33296265d8312485d92dabf3435cdf7d47549db16b40e0523240ea" exitCode=0 Jan 04 00:36:25 crc kubenswrapper[5108]: I0104 00:36:25.183335 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerDied","Data":"71e1a23e6a33296265d8312485d92dabf3435cdf7d47549db16b40e0523240ea"} Jan 04 00:36:25 crc kubenswrapper[5108]: I0104 00:36:25.183563 5108 scope.go:117] "RemoveContainer" containerID="bad0ea277fd94911974fbd9c4fb75c82a3196517d30a4e258eccd8f7cc79a379" Jan 04 00:36:26 crc kubenswrapper[5108]: I0104 00:36:26.195651 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerStarted","Data":"15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"} Jan 04 00:36:31 crc kubenswrapper[5108]: I0104 00:36:31.051815 5108 scope.go:117] "RemoveContainer" containerID="b5c58b4a6349954c323a68194cebb5516510116ad0b63767146a36e3dce7f6b0" Jan 04 00:36:33 crc kubenswrapper[5108]: I0104 00:36:33.251802 5108 generic.go:358] "Generic (PLEG): container finished" podID="15d44445-2e80-4d01-a36e-9b7b0f9f0981" 
containerID="445654ade815f11bd1a50a81937fbee515471a6663de35abc2d754ffff2493d3" exitCode=0 Jan 04 00:36:33 crc kubenswrapper[5108]: I0104 00:36:33.251877 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"15d44445-2e80-4d01-a36e-9b7b0f9f0981","Type":"ContainerDied","Data":"445654ade815f11bd1a50a81937fbee515471a6663de35abc2d754ffff2493d3"} Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.574113 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.703999 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-proxy-ca-bundles\") pod \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.704077 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-container-storage-root\") pod \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.704117 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-buildworkdir\") pod \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.704141 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/15d44445-2e80-4d01-a36e-9b7b0f9f0981-builder-dockercfg-xhpgk-push\") pod 
\"15d44445-2e80-4d01-a36e-9b7b0f9f0981\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.704172 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/15d44445-2e80-4d01-a36e-9b7b0f9f0981-builder-dockercfg-xhpgk-pull\") pod \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.704250 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-blob-cache\") pod \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.704290 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-system-configs\") pod \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.704323 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blcx9\" (UniqueName: \"kubernetes.io/projected/15d44445-2e80-4d01-a36e-9b7b0f9f0981-kube-api-access-blcx9\") pod \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.704380 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15d44445-2e80-4d01-a36e-9b7b0f9f0981-node-pullsecrets\") pod \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.704445 5108 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-ca-bundles\") pod \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.704475 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-container-storage-run\") pod \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.704516 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/15d44445-2e80-4d01-a36e-9b7b0f9f0981-buildcachedir\") pod \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\" (UID: \"15d44445-2e80-4d01-a36e-9b7b0f9f0981\") " Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.704597 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15d44445-2e80-4d01-a36e-9b7b0f9f0981-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "15d44445-2e80-4d01-a36e-9b7b0f9f0981" (UID: "15d44445-2e80-4d01-a36e-9b7b0f9f0981"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.704637 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15d44445-2e80-4d01-a36e-9b7b0f9f0981-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "15d44445-2e80-4d01-a36e-9b7b0f9f0981" (UID: "15d44445-2e80-4d01-a36e-9b7b0f9f0981"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.704763 5108 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15d44445-2e80-4d01-a36e-9b7b0f9f0981-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.704774 5108 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/15d44445-2e80-4d01-a36e-9b7b0f9f0981-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.705424 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "15d44445-2e80-4d01-a36e-9b7b0f9f0981" (UID: "15d44445-2e80-4d01-a36e-9b7b0f9f0981"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.705466 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "15d44445-2e80-4d01-a36e-9b7b0f9f0981" (UID: "15d44445-2e80-4d01-a36e-9b7b0f9f0981"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.706902 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "15d44445-2e80-4d01-a36e-9b7b0f9f0981" (UID: "15d44445-2e80-4d01-a36e-9b7b0f9f0981"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.711347 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "15d44445-2e80-4d01-a36e-9b7b0f9f0981" (UID: "15d44445-2e80-4d01-a36e-9b7b0f9f0981"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.722438 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15d44445-2e80-4d01-a36e-9b7b0f9f0981-kube-api-access-blcx9" (OuterVolumeSpecName: "kube-api-access-blcx9") pod "15d44445-2e80-4d01-a36e-9b7b0f9f0981" (UID: "15d44445-2e80-4d01-a36e-9b7b0f9f0981"). InnerVolumeSpecName "kube-api-access-blcx9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.722574 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d44445-2e80-4d01-a36e-9b7b0f9f0981-builder-dockercfg-xhpgk-pull" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-pull") pod "15d44445-2e80-4d01-a36e-9b7b0f9f0981" (UID: "15d44445-2e80-4d01-a36e-9b7b0f9f0981"). InnerVolumeSpecName "builder-dockercfg-xhpgk-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.722674 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "15d44445-2e80-4d01-a36e-9b7b0f9f0981" (UID: "15d44445-2e80-4d01-a36e-9b7b0f9f0981"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.723456 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d44445-2e80-4d01-a36e-9b7b0f9f0981-builder-dockercfg-xhpgk-push" (OuterVolumeSpecName: "builder-dockercfg-xhpgk-push") pod "15d44445-2e80-4d01-a36e-9b7b0f9f0981" (UID: "15d44445-2e80-4d01-a36e-9b7b0f9f0981"). InnerVolumeSpecName "builder-dockercfg-xhpgk-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.807138 5108 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.807643 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.807741 5108 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.807829 5108 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.807920 5108 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-push\" (UniqueName: \"kubernetes.io/secret/15d44445-2e80-4d01-a36e-9b7b0f9f0981-builder-dockercfg-xhpgk-push\") on node \"crc\" DevicePath \"\"" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.808003 5108 
reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xhpgk-pull\" (UniqueName: \"kubernetes.io/secret/15d44445-2e80-4d01-a36e-9b7b0f9f0981-builder-dockercfg-xhpgk-pull\") on node \"crc\" DevicePath \"\"" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.808085 5108 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.808159 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-blcx9\" (UniqueName: \"kubernetes.io/projected/15d44445-2e80-4d01-a36e-9b7b0f9f0981-kube-api-access-blcx9\") on node \"crc\" DevicePath \"\"" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.812312 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "15d44445-2e80-4d01-a36e-9b7b0f9f0981" (UID: "15d44445-2e80-4d01-a36e-9b7b0f9f0981"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:36:34 crc kubenswrapper[5108]: I0104 00:36:34.909817 5108 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 04 00:36:35 crc kubenswrapper[5108]: I0104 00:36:35.272392 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"15d44445-2e80-4d01-a36e-9b7b0f9f0981","Type":"ContainerDied","Data":"8ae3ed4a22c7976c1e22ddb8987a6cbd33746be709c660cac35fd01cd14ef2f5"} Jan 04 00:36:35 crc kubenswrapper[5108]: I0104 00:36:35.272453 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ae3ed4a22c7976c1e22ddb8987a6cbd33746be709c660cac35fd01cd14ef2f5" Jan 04 00:36:35 crc kubenswrapper[5108]: I0104 00:36:35.272580 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 04 00:36:35 crc kubenswrapper[5108]: I0104 00:36:35.555088 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "15d44445-2e80-4d01-a36e-9b7b0f9f0981" (UID: "15d44445-2e80-4d01-a36e-9b7b0f9f0981"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:36:35 crc kubenswrapper[5108]: I0104 00:36:35.623523 5108 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/15d44445-2e80-4d01-a36e-9b7b0f9f0981-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.769845 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-6668876698-qlfqx"] Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.771597 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="15d44445-2e80-4d01-a36e-9b7b0f9f0981" containerName="manage-dockerfile" Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.771617 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d44445-2e80-4d01-a36e-9b7b0f9f0981" containerName="manage-dockerfile" Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.771631 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a1a303b7-6544-4341-8518-88b23ca64ce5" containerName="oc" Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.771636 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a303b7-6544-4341-8518-88b23ca64ce5" containerName="oc" Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.771658 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="15d44445-2e80-4d01-a36e-9b7b0f9f0981" containerName="git-clone" Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.771664 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d44445-2e80-4d01-a36e-9b7b0f9f0981" containerName="git-clone" Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.771684 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="15d44445-2e80-4d01-a36e-9b7b0f9f0981" containerName="docker-build" Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.771697 
5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d44445-2e80-4d01-a36e-9b7b0f9f0981" containerName="docker-build" Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.771838 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="a1a303b7-6544-4341-8518-88b23ca64ce5" containerName="oc" Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.771856 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="15d44445-2e80-4d01-a36e-9b7b0f9f0981" containerName="docker-build" Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.776615 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-6668876698-qlfqx" Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.779822 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-ljs2p\"" Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.786181 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-6668876698-qlfqx"] Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.887769 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/6eca10e1-2858-49cb-97a4-a53149ea7ceb-runner\") pod \"smart-gateway-operator-6668876698-qlfqx\" (UID: \"6eca10e1-2858-49cb-97a4-a53149ea7ceb\") " pod="service-telemetry/smart-gateway-operator-6668876698-qlfqx" Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.887852 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf9ln\" (UniqueName: \"kubernetes.io/projected/6eca10e1-2858-49cb-97a4-a53149ea7ceb-kube-api-access-bf9ln\") pod \"smart-gateway-operator-6668876698-qlfqx\" (UID: \"6eca10e1-2858-49cb-97a4-a53149ea7ceb\") " pod="service-telemetry/smart-gateway-operator-6668876698-qlfqx" Jan 04 00:36:39 
crc kubenswrapper[5108]: I0104 00:36:39.989708 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bf9ln\" (UniqueName: \"kubernetes.io/projected/6eca10e1-2858-49cb-97a4-a53149ea7ceb-kube-api-access-bf9ln\") pod \"smart-gateway-operator-6668876698-qlfqx\" (UID: \"6eca10e1-2858-49cb-97a4-a53149ea7ceb\") " pod="service-telemetry/smart-gateway-operator-6668876698-qlfqx" Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.989903 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/6eca10e1-2858-49cb-97a4-a53149ea7ceb-runner\") pod \"smart-gateway-operator-6668876698-qlfqx\" (UID: \"6eca10e1-2858-49cb-97a4-a53149ea7ceb\") " pod="service-telemetry/smart-gateway-operator-6668876698-qlfqx" Jan 04 00:36:39 crc kubenswrapper[5108]: I0104 00:36:39.990478 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/6eca10e1-2858-49cb-97a4-a53149ea7ceb-runner\") pod \"smart-gateway-operator-6668876698-qlfqx\" (UID: \"6eca10e1-2858-49cb-97a4-a53149ea7ceb\") " pod="service-telemetry/smart-gateway-operator-6668876698-qlfqx" Jan 04 00:36:40 crc kubenswrapper[5108]: I0104 00:36:40.011165 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf9ln\" (UniqueName: \"kubernetes.io/projected/6eca10e1-2858-49cb-97a4-a53149ea7ceb-kube-api-access-bf9ln\") pod \"smart-gateway-operator-6668876698-qlfqx\" (UID: \"6eca10e1-2858-49cb-97a4-a53149ea7ceb\") " pod="service-telemetry/smart-gateway-operator-6668876698-qlfqx" Jan 04 00:36:40 crc kubenswrapper[5108]: I0104 00:36:40.093859 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-6668876698-qlfqx" Jan 04 00:36:40 crc kubenswrapper[5108]: I0104 00:36:40.366746 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-6668876698-qlfqx"] Jan 04 00:36:41 crc kubenswrapper[5108]: I0104 00:36:41.337572 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-6668876698-qlfqx" event={"ID":"6eca10e1-2858-49cb-97a4-a53149ea7ceb","Type":"ContainerStarted","Data":"c6cf32b9165d094e6b6c4c6901f4d23571c0731fda058d7e30f2a37a04cc5520"} Jan 04 00:36:43 crc kubenswrapper[5108]: I0104 00:36:43.987122 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-845d76977f-skznp"] Jan 04 00:36:44 crc kubenswrapper[5108]: I0104 00:36:44.007280 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-845d76977f-skznp"] Jan 04 00:36:44 crc kubenswrapper[5108]: I0104 00:36:44.007399 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-845d76977f-skznp" Jan 04 00:36:44 crc kubenswrapper[5108]: I0104 00:36:44.011314 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-tx8ql\"" Jan 04 00:36:44 crc kubenswrapper[5108]: I0104 00:36:44.086576 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25gct\" (UniqueName: \"kubernetes.io/projected/90824245-ac48-46b3-890a-0aff0a7a62a1-kube-api-access-25gct\") pod \"service-telemetry-operator-845d76977f-skznp\" (UID: \"90824245-ac48-46b3-890a-0aff0a7a62a1\") " pod="service-telemetry/service-telemetry-operator-845d76977f-skznp" Jan 04 00:36:44 crc kubenswrapper[5108]: I0104 00:36:44.087001 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/90824245-ac48-46b3-890a-0aff0a7a62a1-runner\") pod \"service-telemetry-operator-845d76977f-skznp\" (UID: \"90824245-ac48-46b3-890a-0aff0a7a62a1\") " pod="service-telemetry/service-telemetry-operator-845d76977f-skznp" Jan 04 00:36:44 crc kubenswrapper[5108]: I0104 00:36:44.189401 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/90824245-ac48-46b3-890a-0aff0a7a62a1-runner\") pod \"service-telemetry-operator-845d76977f-skznp\" (UID: \"90824245-ac48-46b3-890a-0aff0a7a62a1\") " pod="service-telemetry/service-telemetry-operator-845d76977f-skznp" Jan 04 00:36:44 crc kubenswrapper[5108]: I0104 00:36:44.189632 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-25gct\" (UniqueName: \"kubernetes.io/projected/90824245-ac48-46b3-890a-0aff0a7a62a1-kube-api-access-25gct\") pod \"service-telemetry-operator-845d76977f-skznp\" (UID: \"90824245-ac48-46b3-890a-0aff0a7a62a1\") " 
pod="service-telemetry/service-telemetry-operator-845d76977f-skznp" Jan 04 00:36:44 crc kubenswrapper[5108]: I0104 00:36:44.189898 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/90824245-ac48-46b3-890a-0aff0a7a62a1-runner\") pod \"service-telemetry-operator-845d76977f-skznp\" (UID: \"90824245-ac48-46b3-890a-0aff0a7a62a1\") " pod="service-telemetry/service-telemetry-operator-845d76977f-skznp" Jan 04 00:36:44 crc kubenswrapper[5108]: I0104 00:36:44.234875 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-25gct\" (UniqueName: \"kubernetes.io/projected/90824245-ac48-46b3-890a-0aff0a7a62a1-kube-api-access-25gct\") pod \"service-telemetry-operator-845d76977f-skznp\" (UID: \"90824245-ac48-46b3-890a-0aff0a7a62a1\") " pod="service-telemetry/service-telemetry-operator-845d76977f-skznp" Jan 04 00:36:44 crc kubenswrapper[5108]: I0104 00:36:44.352973 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-845d76977f-skznp" Jan 04 00:36:54 crc kubenswrapper[5108]: I0104 00:36:54.132649 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-845d76977f-skznp"] Jan 04 00:36:58 crc kubenswrapper[5108]: I0104 00:36:58.505969 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-845d76977f-skznp" event={"ID":"90824245-ac48-46b3-890a-0aff0a7a62a1","Type":"ContainerStarted","Data":"05a27603020c0f488e73607c034ee8f7976016bba9422ea1e1fc508f8753091e"} Jan 04 00:36:59 crc kubenswrapper[5108]: I0104 00:36:59.519989 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-6668876698-qlfqx" event={"ID":"6eca10e1-2858-49cb-97a4-a53149ea7ceb","Type":"ContainerStarted","Data":"52ecbb6ceae6355e2ea49dc3d12e1780f3690158f988d689f9afe41936a9478c"} Jan 04 00:36:59 crc kubenswrapper[5108]: I0104 00:36:59.542158 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-6668876698-qlfqx" podStartSLOduration=2.337076843 podStartE2EDuration="20.541931065s" podCreationTimestamp="2026-01-04 00:36:39 +0000 UTC" firstStartedPulling="2026-01-04 00:36:40.386798226 +0000 UTC m=+1574.375363322" lastFinishedPulling="2026-01-04 00:36:58.591652458 +0000 UTC m=+1592.580217544" observedRunningTime="2026-01-04 00:36:59.539629841 +0000 UTC m=+1593.528194947" watchObservedRunningTime="2026-01-04 00:36:59.541931065 +0000 UTC m=+1593.530496151" Jan 04 00:37:05 crc kubenswrapper[5108]: I0104 00:37:05.581021 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-845d76977f-skznp" event={"ID":"90824245-ac48-46b3-890a-0aff0a7a62a1","Type":"ContainerStarted","Data":"0cf0701c0368638e66a40ba2de18785a5f6d97bf35f22dff0d9e88f25b99a343"} Jan 04 00:37:05 crc kubenswrapper[5108]: I0104 00:37:05.604385 
5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-845d76977f-skznp" podStartSLOduration=15.933835102 podStartE2EDuration="22.604360566s" podCreationTimestamp="2026-01-04 00:36:43 +0000 UTC" firstStartedPulling="2026-01-04 00:36:58.082614016 +0000 UTC m=+1592.071179102" lastFinishedPulling="2026-01-04 00:37:04.75313949 +0000 UTC m=+1598.741704566" observedRunningTime="2026-01-04 00:37:05.600828649 +0000 UTC m=+1599.589393745" watchObservedRunningTime="2026-01-04 00:37:05.604360566 +0000 UTC m=+1599.592925642" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.266520 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-8blx7"] Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.288307 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-8blx7"] Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.288513 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.293136 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.293662 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.295093 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.295360 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.295535 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.295715 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.295911 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-jvwrb\"" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.374463 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 
00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.374885 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.375047 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.375177 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.375327 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhl5m\" (UniqueName: \"kubernetes.io/projected/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-kube-api-access-dhl5m\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.376604 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-sasl-users\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.376887 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-sasl-config\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.478702 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.478764 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.478985 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dhl5m\" (UniqueName: \"kubernetes.io/projected/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-kube-api-access-dhl5m\") pod \"default-interconnect-55bf8d5cb-8blx7\" 
(UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.479138 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-sasl-users\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.479407 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-sasl-config\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.479458 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.479489 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.481479 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: 
\"kubernetes.io/configmap/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-sasl-config\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.493642 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-sasl-users\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.494056 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.494396 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.494550 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 
04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.495309 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7"
Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.508613 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhl5m\" (UniqueName: \"kubernetes.io/projected/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-kube-api-access-dhl5m\") pod \"default-interconnect-55bf8d5cb-8blx7\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7"
Jan 04 00:37:30 crc kubenswrapper[5108]: I0104 00:37:30.627500 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7"
Jan 04 00:37:31 crc kubenswrapper[5108]: I0104 00:37:31.084950 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-8blx7"]
Jan 04 00:37:31 crc kubenswrapper[5108]: I0104 00:37:31.793995 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" event={"ID":"dc18b015-2dc5-4ecf-a373-a9a04b7ab311","Type":"ContainerStarted","Data":"5be9c70d015635f1ebea5f85084101d4b24127c84423cf7631c351f4bcba3bbb"}
Jan 04 00:37:36 crc kubenswrapper[5108]: I0104 00:37:36.835646 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" event={"ID":"dc18b015-2dc5-4ecf-a373-a9a04b7ab311","Type":"ContainerStarted","Data":"994133e9d7f332264051a3a382db80b98799418a27ffa921efb066837e7cc085"}
Jan 04 00:37:36 crc kubenswrapper[5108]: I0104 00:37:36.859286 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" podStartSLOduration=1.542373386 podStartE2EDuration="6.859254074s" podCreationTimestamp="2026-01-04 00:37:30 +0000 UTC" firstStartedPulling="2026-01-04 00:37:31.092260507 +0000 UTC m=+1625.080825593" lastFinishedPulling="2026-01-04 00:37:36.409141195 +0000 UTC m=+1630.397706281" observedRunningTime="2026-01-04 00:37:36.856325103 +0000 UTC m=+1630.844890209" watchObservedRunningTime="2026-01-04 00:37:36.859254074 +0000 UTC m=+1630.847819150"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.698297 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"]
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.725961 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"]
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.726301 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.733682 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\""
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.734047 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\""
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.734055 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\""
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.734121 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\""
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.734294 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\""
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.733935 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\""
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.733909 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\""
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.734613 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\""
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.735736 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-546xk\""
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.735917 5108 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\""
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.872699 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bbb51482-bfac-4350-9ec7-b9470cbf4b19-config-out\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.872781 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.872833 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78fk2\" (UniqueName: \"kubernetes.io/projected/bbb51482-bfac-4350-9ec7-b9470cbf4b19-kube-api-access-78fk2\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.872871 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/bbb51482-bfac-4350-9ec7-b9470cbf4b19-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.872901 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/bbb51482-bfac-4350-9ec7-b9470cbf4b19-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.872929 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bbb51482-bfac-4350-9ec7-b9470cbf4b19-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.872956 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-config\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.873019 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.873050 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbb51482-bfac-4350-9ec7-b9470cbf4b19-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.873083 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-web-config\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.873129 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6fc8c56a-8595-4aad-8ea3-9da2a17b92c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fc8c56a-8595-4aad-8ea3-9da2a17b92c3\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.873162 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bbb51482-bfac-4350-9ec7-b9470cbf4b19-tls-assets\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.975298 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bbb51482-bfac-4350-9ec7-b9470cbf4b19-config-out\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.975384 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.975430 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-78fk2\" (UniqueName: \"kubernetes.io/projected/bbb51482-bfac-4350-9ec7-b9470cbf4b19-kube-api-access-78fk2\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.975454 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/bbb51482-bfac-4350-9ec7-b9470cbf4b19-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.975478 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/bbb51482-bfac-4350-9ec7-b9470cbf4b19-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: E0104 00:37:41.975702 5108 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Jan 04 00:37:41 crc kubenswrapper[5108]: E0104 00:37:41.975905 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-secret-default-prometheus-proxy-tls podName:bbb51482-bfac-4350-9ec7-b9470cbf4b19 nodeName:}" failed. No retries permitted until 2026-01-04 00:37:42.475847184 +0000 UTC m=+1636.464412270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "bbb51482-bfac-4350-9ec7-b9470cbf4b19") : secret "default-prometheus-proxy-tls" not found
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.976117 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bbb51482-bfac-4350-9ec7-b9470cbf4b19-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.976305 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-config\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.976595 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.976747 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbb51482-bfac-4350-9ec7-b9470cbf4b19-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.976886 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-web-config\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.977038 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-6fc8c56a-8595-4aad-8ea3-9da2a17b92c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fc8c56a-8595-4aad-8ea3-9da2a17b92c3\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.977184 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/bbb51482-bfac-4350-9ec7-b9470cbf4b19-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.977183 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bbb51482-bfac-4350-9ec7-b9470cbf4b19-tls-assets\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.977408 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/bbb51482-bfac-4350-9ec7-b9470cbf4b19-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.977426 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bbb51482-bfac-4350-9ec7-b9470cbf4b19-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.977846 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbb51482-bfac-4350-9ec7-b9470cbf4b19-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.983975 5108 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.984029 5108 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-6fc8c56a-8595-4aad-8ea3-9da2a17b92c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fc8c56a-8595-4aad-8ea3-9da2a17b92c3\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b854347d8f44dae3f654b00ba293324adc84f86287dc52f0e008ece8cddd2e6b/globalmount\"" pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.985590 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-config\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.996392 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bbb51482-bfac-4350-9ec7-b9470cbf4b19-tls-assets\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.998335 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-web-config\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:41 crc kubenswrapper[5108]: I0104 00:37:41.999950 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bbb51482-bfac-4350-9ec7-b9470cbf4b19-config-out\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:42 crc kubenswrapper[5108]: I0104 00:37:42.001385 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:42 crc kubenswrapper[5108]: I0104 00:37:42.007251 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-78fk2\" (UniqueName: \"kubernetes.io/projected/bbb51482-bfac-4350-9ec7-b9470cbf4b19-kube-api-access-78fk2\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:42 crc kubenswrapper[5108]: I0104 00:37:42.028658 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-6fc8c56a-8595-4aad-8ea3-9da2a17b92c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6fc8c56a-8595-4aad-8ea3-9da2a17b92c3\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:42 crc kubenswrapper[5108]: I0104 00:37:42.485663 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:42 crc kubenswrapper[5108]: E0104 00:37:42.485891 5108 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Jan 04 00:37:42 crc kubenswrapper[5108]: E0104 00:37:42.486488 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-secret-default-prometheus-proxy-tls podName:bbb51482-bfac-4350-9ec7-b9470cbf4b19 nodeName:}" failed. No retries permitted until 2026-01-04 00:37:43.48644284 +0000 UTC m=+1637.475007926 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "bbb51482-bfac-4350-9ec7-b9470cbf4b19") : secret "default-prometheus-proxy-tls" not found
Jan 04 00:37:43 crc kubenswrapper[5108]: I0104 00:37:43.500572 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:43 crc kubenswrapper[5108]: I0104 00:37:43.509024 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbb51482-bfac-4350-9ec7-b9470cbf4b19-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"bbb51482-bfac-4350-9ec7-b9470cbf4b19\") " pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:43 crc kubenswrapper[5108]: I0104 00:37:43.549921 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0"
Jan 04 00:37:43 crc kubenswrapper[5108]: I0104 00:37:43.791821 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"]
Jan 04 00:37:43 crc kubenswrapper[5108]: I0104 00:37:43.896427 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"bbb51482-bfac-4350-9ec7-b9470cbf4b19","Type":"ContainerStarted","Data":"d17b4a799b03612d3a329740a5c6086320aaf22b9def5aeea123a99f92c0046b"}
Jan 04 00:37:47 crc kubenswrapper[5108]: I0104 00:37:47.942295 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"bbb51482-bfac-4350-9ec7-b9470cbf4b19","Type":"ContainerStarted","Data":"68343ea43a360ff5a70f3adeb96f106bec44c316e86a8ca46c2896574effb3a1"}
Jan 04 00:37:52 crc kubenswrapper[5108]: I0104 00:37:52.894744 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-hjv6t"]
Jan 04 00:37:52 crc kubenswrapper[5108]: I0104 00:37:52.927027 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-hjv6t"]
Jan 04 00:37:52 crc kubenswrapper[5108]: I0104 00:37:52.927235 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-hjv6t"
Jan 04 00:37:53 crc kubenswrapper[5108]: I0104 00:37:53.066540 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g8vj\" (UniqueName: \"kubernetes.io/projected/f61d3277-40d7-4ac1-994c-e64ce83b3fe9-kube-api-access-9g8vj\") pod \"default-snmp-webhook-694dc457d5-hjv6t\" (UID: \"f61d3277-40d7-4ac1-994c-e64ce83b3fe9\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-hjv6t"
Jan 04 00:37:53 crc kubenswrapper[5108]: I0104 00:37:53.168093 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9g8vj\" (UniqueName: \"kubernetes.io/projected/f61d3277-40d7-4ac1-994c-e64ce83b3fe9-kube-api-access-9g8vj\") pod \"default-snmp-webhook-694dc457d5-hjv6t\" (UID: \"f61d3277-40d7-4ac1-994c-e64ce83b3fe9\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-hjv6t"
Jan 04 00:37:53 crc kubenswrapper[5108]: I0104 00:37:53.192312 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g8vj\" (UniqueName: \"kubernetes.io/projected/f61d3277-40d7-4ac1-994c-e64ce83b3fe9-kube-api-access-9g8vj\") pod \"default-snmp-webhook-694dc457d5-hjv6t\" (UID: \"f61d3277-40d7-4ac1-994c-e64ce83b3fe9\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-hjv6t"
Jan 04 00:37:53 crc kubenswrapper[5108]: I0104 00:37:53.252963 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-hjv6t"
Jan 04 00:37:53 crc kubenswrapper[5108]: I0104 00:37:53.497536 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-hjv6t"]
Jan 04 00:37:53 crc kubenswrapper[5108]: W0104 00:37:53.519240 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf61d3277_40d7_4ac1_994c_e64ce83b3fe9.slice/crio-6f8888bf1ee02962e4439a34af9288e482c6fae78f9952b6e008d5a6c0076a2a WatchSource:0}: Error finding container 6f8888bf1ee02962e4439a34af9288e482c6fae78f9952b6e008d5a6c0076a2a: Status 404 returned error can't find the container with id 6f8888bf1ee02962e4439a34af9288e482c6fae78f9952b6e008d5a6c0076a2a
Jan 04 00:37:54 crc kubenswrapper[5108]: I0104 00:37:54.009774 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-hjv6t" event={"ID":"f61d3277-40d7-4ac1-994c-e64ce83b3fe9","Type":"ContainerStarted","Data":"6f8888bf1ee02962e4439a34af9288e482c6fae78f9952b6e008d5a6c0076a2a"}
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.031806 5108 generic.go:358] "Generic (PLEG): container finished" podID="bbb51482-bfac-4350-9ec7-b9470cbf4b19" containerID="68343ea43a360ff5a70f3adeb96f106bec44c316e86a8ca46c2896574effb3a1" exitCode=0
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.031892 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"bbb51482-bfac-4350-9ec7-b9470cbf4b19","Type":"ContainerDied","Data":"68343ea43a360ff5a70f3adeb96f106bec44c316e86a8ca46c2896574effb3a1"}
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.523390 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"]
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.820885 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/alertmanager-default-0"]
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.821299 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.827371 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\""
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.827491 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\""
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.828311 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\""
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.830470 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\""
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.830868 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\""
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.830942 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-qmngh\""
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.961500 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/52617309-d688-4e3c-8a64-1894511950bc-tls-assets\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.961561 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-web-config\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.961617 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-21d2b0c4-fe1e-4a44-858b-315478dbe555\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21d2b0c4-fe1e-4a44-858b-315478dbe555\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.961643 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.961680 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/52617309-d688-4e3c-8a64-1894511950bc-config-out\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.961704 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z75x8\" (UniqueName: \"kubernetes.io/projected/52617309-d688-4e3c-8a64-1894511950bc-kube-api-access-z75x8\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.961726 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.961749 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:56 crc kubenswrapper[5108]: I0104 00:37:56.961785 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-config-volume\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.063158 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-web-config\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.063251 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-21d2b0c4-fe1e-4a44-858b-315478dbe555\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21d2b0c4-fe1e-4a44-858b-315478dbe555\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.063274 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.063342 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/52617309-d688-4e3c-8a64-1894511950bc-config-out\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.063375 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z75x8\" (UniqueName: \"kubernetes.io/projected/52617309-d688-4e3c-8a64-1894511950bc-kube-api-access-z75x8\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.063912 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.064135 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:57 crc kubenswrapper[5108]: E0104 00:37:57.064623 5108 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.064795 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-config-volume\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:57 crc kubenswrapper[5108]: E0104 00:37:57.064861 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-alertmanager-proxy-tls podName:52617309-d688-4e3c-8a64-1894511950bc nodeName:}" failed. No retries permitted until 2026-01-04 00:37:57.564783298 +0000 UTC m=+1651.553348384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "52617309-d688-4e3c-8a64-1894511950bc") : secret "default-alertmanager-proxy-tls" not found
Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.065562 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/52617309-d688-4e3c-8a64-1894511950bc-tls-assets\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0"
Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.072811 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/52617309-d688-4e3c-8a64-1894511950bc-config-out\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " 
pod="service-telemetry/alertmanager-default-0" Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.072834 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-config-volume\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0" Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.073426 5108 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.073466 5108 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-21d2b0c4-fe1e-4a44-858b-315478dbe555\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21d2b0c4-fe1e-4a44-858b-315478dbe555\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e086708240746b73d0e7b3d4dea084a2eade0a7edad1231142b316b509354314/globalmount\"" pod="service-telemetry/alertmanager-default-0" Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.076184 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-web-config\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0" Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.080580 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0" Jan 04 00:37:57 
crc kubenswrapper[5108]: I0104 00:37:57.081527 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0" Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.082072 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/52617309-d688-4e3c-8a64-1894511950bc-tls-assets\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0" Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.097646 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z75x8\" (UniqueName: \"kubernetes.io/projected/52617309-d688-4e3c-8a64-1894511950bc-kube-api-access-z75x8\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0" Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.112693 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-21d2b0c4-fe1e-4a44-858b-315478dbe555\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-21d2b0c4-fe1e-4a44-858b-315478dbe555\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0" Jan 04 00:37:57 crc kubenswrapper[5108]: I0104 00:37:57.576823 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0" Jan 04 00:37:57 crc 
kubenswrapper[5108]: E0104 00:37:57.578404 5108 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 04 00:37:57 crc kubenswrapper[5108]: E0104 00:37:57.578552 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-alertmanager-proxy-tls podName:52617309-d688-4e3c-8a64-1894511950bc nodeName:}" failed. No retries permitted until 2026-01-04 00:37:58.578517941 +0000 UTC m=+1652.567083027 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "52617309-d688-4e3c-8a64-1894511950bc") : secret "default-alertmanager-proxy-tls" not found Jan 04 00:37:58 crc kubenswrapper[5108]: I0104 00:37:58.593628 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0" Jan 04 00:37:58 crc kubenswrapper[5108]: E0104 00:37:58.593915 5108 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 04 00:37:58 crc kubenswrapper[5108]: E0104 00:37:58.594082 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-alertmanager-proxy-tls podName:52617309-d688-4e3c-8a64-1894511950bc nodeName:}" failed. No retries permitted until 2026-01-04 00:38:00.594017275 +0000 UTC m=+1654.582582371 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "52617309-d688-4e3c-8a64-1894511950bc") : secret "default-alertmanager-proxy-tls" not found Jan 04 00:38:00 crc kubenswrapper[5108]: I0104 00:38:00.134781 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29458118-dfc8d"] Jan 04 00:38:00 crc kubenswrapper[5108]: I0104 00:38:00.627479 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0" Jan 04 00:38:00 crc kubenswrapper[5108]: E0104 00:38:00.627763 5108 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 04 00:38:00 crc kubenswrapper[5108]: E0104 00:38:00.627896 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-alertmanager-proxy-tls podName:52617309-d688-4e3c-8a64-1894511950bc nodeName:}" failed. No retries permitted until 2026-01-04 00:38:04.627869433 +0000 UTC m=+1658.616434519 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "52617309-d688-4e3c-8a64-1894511950bc") : secret "default-alertmanager-proxy-tls" not found Jan 04 00:38:01 crc kubenswrapper[5108]: I0104 00:38:01.083676 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458118-dfc8d"] Jan 04 00:38:01 crc kubenswrapper[5108]: I0104 00:38:01.084126 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458118-dfc8d" Jan 04 00:38:01 crc kubenswrapper[5108]: I0104 00:38:01.087776 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-s7k94\"" Jan 04 00:38:01 crc kubenswrapper[5108]: I0104 00:38:01.088431 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 04 00:38:01 crc kubenswrapper[5108]: I0104 00:38:01.088463 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 04 00:38:01 crc kubenswrapper[5108]: I0104 00:38:01.136284 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncx4h\" (UniqueName: \"kubernetes.io/projected/c5c0d7f7-1057-4fd8-ac9c-af9739624339-kube-api-access-ncx4h\") pod \"auto-csr-approver-29458118-dfc8d\" (UID: \"c5c0d7f7-1057-4fd8-ac9c-af9739624339\") " pod="openshift-infra/auto-csr-approver-29458118-dfc8d" Jan 04 00:38:01 crc kubenswrapper[5108]: I0104 00:38:01.238457 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ncx4h\" (UniqueName: \"kubernetes.io/projected/c5c0d7f7-1057-4fd8-ac9c-af9739624339-kube-api-access-ncx4h\") pod 
\"auto-csr-approver-29458118-dfc8d\" (UID: \"c5c0d7f7-1057-4fd8-ac9c-af9739624339\") " pod="openshift-infra/auto-csr-approver-29458118-dfc8d" Jan 04 00:38:01 crc kubenswrapper[5108]: I0104 00:38:01.261109 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncx4h\" (UniqueName: \"kubernetes.io/projected/c5c0d7f7-1057-4fd8-ac9c-af9739624339-kube-api-access-ncx4h\") pod \"auto-csr-approver-29458118-dfc8d\" (UID: \"c5c0d7f7-1057-4fd8-ac9c-af9739624339\") " pod="openshift-infra/auto-csr-approver-29458118-dfc8d" Jan 04 00:38:01 crc kubenswrapper[5108]: I0104 00:38:01.415479 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458118-dfc8d" Jan 04 00:38:04 crc kubenswrapper[5108]: I0104 00:38:04.703073 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0" Jan 04 00:38:04 crc kubenswrapper[5108]: I0104 00:38:04.711142 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/52617309-d688-4e3c-8a64-1894511950bc-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"52617309-d688-4e3c-8a64-1894511950bc\") " pod="service-telemetry/alertmanager-default-0" Jan 04 00:38:04 crc kubenswrapper[5108]: I0104 00:38:04.949070 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 04 00:38:05 crc kubenswrapper[5108]: I0104 00:38:05.112587 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458118-dfc8d"] Jan 04 00:38:05 crc kubenswrapper[5108]: I0104 00:38:05.135076 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 04 00:38:05 crc kubenswrapper[5108]: I0104 00:38:05.409289 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458118-dfc8d" event={"ID":"c5c0d7f7-1057-4fd8-ac9c-af9739624339","Type":"ContainerStarted","Data":"cb3d2b73962763665ef7e7bf45772f3ff6710d83d7b72f76a4e433816907125b"} Jan 04 00:38:05 crc kubenswrapper[5108]: I0104 00:38:05.621285 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 04 00:38:05 crc kubenswrapper[5108]: W0104 00:38:05.643241 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52617309_d688_4e3c_8a64_1894511950bc.slice/crio-1cbd796f8c918c85e75dbb0372fcd1417d6f911d16609fd3d304c8c00975175d WatchSource:0}: Error finding container 1cbd796f8c918c85e75dbb0372fcd1417d6f911d16609fd3d304c8c00975175d: Status 404 returned error can't find the container with id 1cbd796f8c918c85e75dbb0372fcd1417d6f911d16609fd3d304c8c00975175d Jan 04 00:38:06 crc kubenswrapper[5108]: I0104 00:38:06.431425 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"52617309-d688-4e3c-8a64-1894511950bc","Type":"ContainerStarted","Data":"1cbd796f8c918c85e75dbb0372fcd1417d6f911d16609fd3d304c8c00975175d"} Jan 04 00:38:06 crc kubenswrapper[5108]: I0104 00:38:06.434774 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-hjv6t" 
event={"ID":"f61d3277-40d7-4ac1-994c-e64ce83b3fe9","Type":"ContainerStarted","Data":"623f9b63d310f73c7596ae29124b3531e41358243c1419a9d378971b46221d40"} Jan 04 00:38:06 crc kubenswrapper[5108]: I0104 00:38:06.462304 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-694dc457d5-hjv6t" podStartSLOduration=2.675947132 podStartE2EDuration="14.462274107s" podCreationTimestamp="2026-01-04 00:37:52 +0000 UTC" firstStartedPulling="2026-01-04 00:37:53.522015934 +0000 UTC m=+1647.510581020" lastFinishedPulling="2026-01-04 00:38:05.308342899 +0000 UTC m=+1659.296907995" observedRunningTime="2026-01-04 00:38:06.452110598 +0000 UTC m=+1660.440675694" watchObservedRunningTime="2026-01-04 00:38:06.462274107 +0000 UTC m=+1660.450839213" Jan 04 00:38:08 crc kubenswrapper[5108]: I0104 00:38:08.460760 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458118-dfc8d" event={"ID":"c5c0d7f7-1057-4fd8-ac9c-af9739624339","Type":"ContainerStarted","Data":"981124768e1215576bab1ae7e2b3dc25840da8080e151bff2aa7ef69d38ac239"} Jan 04 00:38:09 crc kubenswrapper[5108]: I0104 00:38:09.534286 5108 generic.go:358] "Generic (PLEG): container finished" podID="c5c0d7f7-1057-4fd8-ac9c-af9739624339" containerID="981124768e1215576bab1ae7e2b3dc25840da8080e151bff2aa7ef69d38ac239" exitCode=0 Jan 04 00:38:09 crc kubenswrapper[5108]: I0104 00:38:09.534432 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458118-dfc8d" event={"ID":"c5c0d7f7-1057-4fd8-ac9c-af9739624339","Type":"ContainerDied","Data":"981124768e1215576bab1ae7e2b3dc25840da8080e151bff2aa7ef69d38ac239"} Jan 04 00:38:10 crc kubenswrapper[5108]: I0104 00:38:10.545837 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" 
event={"ID":"52617309-d688-4e3c-8a64-1894511950bc","Type":"ContainerStarted","Data":"2eb419361cac06d811224968f43bb432fa48dca1326d0eb729dd7539892dac3c"} Jan 04 00:38:11 crc kubenswrapper[5108]: I0104 00:38:11.864684 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458118-dfc8d" Jan 04 00:38:11 crc kubenswrapper[5108]: I0104 00:38:11.996615 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncx4h\" (UniqueName: \"kubernetes.io/projected/c5c0d7f7-1057-4fd8-ac9c-af9739624339-kube-api-access-ncx4h\") pod \"c5c0d7f7-1057-4fd8-ac9c-af9739624339\" (UID: \"c5c0d7f7-1057-4fd8-ac9c-af9739624339\") " Jan 04 00:38:12 crc kubenswrapper[5108]: I0104 00:38:12.014426 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c0d7f7-1057-4fd8-ac9c-af9739624339-kube-api-access-ncx4h" (OuterVolumeSpecName: "kube-api-access-ncx4h") pod "c5c0d7f7-1057-4fd8-ac9c-af9739624339" (UID: "c5c0d7f7-1057-4fd8-ac9c-af9739624339"). InnerVolumeSpecName "kube-api-access-ncx4h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:38:12 crc kubenswrapper[5108]: I0104 00:38:12.098564 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ncx4h\" (UniqueName: \"kubernetes.io/projected/c5c0d7f7-1057-4fd8-ac9c-af9739624339-kube-api-access-ncx4h\") on node \"crc\" DevicePath \"\"" Jan 04 00:38:12 crc kubenswrapper[5108]: I0104 00:38:12.573987 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458118-dfc8d" event={"ID":"c5c0d7f7-1057-4fd8-ac9c-af9739624339","Type":"ContainerDied","Data":"cb3d2b73962763665ef7e7bf45772f3ff6710d83d7b72f76a4e433816907125b"} Jan 04 00:38:12 crc kubenswrapper[5108]: I0104 00:38:12.575583 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb3d2b73962763665ef7e7bf45772f3ff6710d83d7b72f76a4e433816907125b" Jan 04 00:38:12 crc kubenswrapper[5108]: I0104 00:38:12.574021 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458118-dfc8d" Jan 04 00:38:12 crc kubenswrapper[5108]: I0104 00:38:12.940957 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29458112-lfqrb"] Jan 04 00:38:12 crc kubenswrapper[5108]: I0104 00:38:12.952411 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29458112-lfqrb"] Jan 04 00:38:13 crc kubenswrapper[5108]: I0104 00:38:13.587656 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"bbb51482-bfac-4350-9ec7-b9470cbf4b19","Type":"ContainerStarted","Data":"0efebbff1e9b1d4fda2e0ce6c293079a2d4736f816808e23d7d5ca790064d6aa"} Jan 04 00:38:14 crc kubenswrapper[5108]: I0104 00:38:14.459628 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="076318d3-ef17-4b92-8c2f-1c9c9ce86c2d" path="/var/lib/kubelet/pods/076318d3-ef17-4b92-8c2f-1c9c9ce86c2d/volumes" Jan 04 
00:38:14 crc kubenswrapper[5108]: I0104 00:38:14.994604 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl"] Jan 04 00:38:14 crc kubenswrapper[5108]: I0104 00:38:14.995735 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c5c0d7f7-1057-4fd8-ac9c-af9739624339" containerName="oc" Jan 04 00:38:14 crc kubenswrapper[5108]: I0104 00:38:14.995766 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c0d7f7-1057-4fd8-ac9c-af9739624339" containerName="oc" Jan 04 00:38:14 crc kubenswrapper[5108]: I0104 00:38:14.995914 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="c5c0d7f7-1057-4fd8-ac9c-af9739624339" containerName="oc" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.017288 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl"] Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.017517 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.021463 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-xhqjj\"" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.021476 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.021540 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.026438 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.044630 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.044750 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.044864 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.045070 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47n9q\" (UniqueName: \"kubernetes.io/projected/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-kube-api-access-47n9q\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.045210 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.146638 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.146752 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.146796 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.146829 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.146877 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-47n9q\" (UniqueName: \"kubernetes.io/projected/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-kube-api-access-47n9q\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.148344 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: 
\"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" Jan 04 00:38:15 crc kubenswrapper[5108]: E0104 00:38:15.148539 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 04 00:38:15 crc kubenswrapper[5108]: E0104 00:38:15.148632 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-default-cloud1-coll-meter-proxy-tls podName:9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04 nodeName:}" failed. No retries permitted until 2026-01-04 00:38:15.648606076 +0000 UTC m=+1669.637171162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" (UID: "9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.149222 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.165975 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" Jan 04 00:38:15 crc 
kubenswrapper[5108]: I0104 00:38:15.174367 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-47n9q\" (UniqueName: \"kubernetes.io/projected/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-kube-api-access-47n9q\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl"
Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.609476 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"bbb51482-bfac-4350-9ec7-b9470cbf4b19","Type":"ContainerStarted","Data":"dbbf1516e7ca656a677f51f987f514ee8caa0459e4f7bbfff238ff2fdfb75c00"}
Jan 04 00:38:15 crc kubenswrapper[5108]: I0104 00:38:15.657376 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl"
Jan 04 00:38:15 crc kubenswrapper[5108]: E0104 00:38:15.657724 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Jan 04 00:38:15 crc kubenswrapper[5108]: E0104 00:38:15.657824 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-default-cloud1-coll-meter-proxy-tls podName:9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04 nodeName:}" failed. No retries permitted until 2026-01-04 00:38:16.657798412 +0000 UTC m=+1670.646363518 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" (UID: "9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04") : secret "default-cloud1-coll-meter-proxy-tls" not found
Jan 04 00:38:16 crc kubenswrapper[5108]: I0104 00:38:16.680457 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl"
Jan 04 00:38:16 crc kubenswrapper[5108]: E0104 00:38:16.680872 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Jan 04 00:38:16 crc kubenswrapper[5108]: E0104 00:38:16.681064 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-default-cloud1-coll-meter-proxy-tls podName:9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04 nodeName:}" failed. No retries permitted until 2026-01-04 00:38:18.6810227 +0000 UTC m=+1672.669587926 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" (UID: "9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04") : secret "default-cloud1-coll-meter-proxy-tls" not found
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.021091 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"]
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.074353 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"]
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.074590 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.079132 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\""
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.079558 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\""
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.208840 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.209307 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.209343 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.209380 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.209405 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58w5k\" (UniqueName: \"kubernetes.io/projected/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-kube-api-access-58w5k\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.311075 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.311161 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.311195 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.311275 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.311300 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-58w5k\" (UniqueName: \"kubernetes.io/projected/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-kube-api-access-58w5k\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:18 crc kubenswrapper[5108]: E0104 00:38:18.312319 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Jan 04 00:38:18 crc kubenswrapper[5108]: E0104 00:38:18.312425 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-default-cloud1-ceil-meter-proxy-tls podName:2d896738-b0d6-4d0a-81b6-3e24ac1ce92d nodeName:}" failed. No retries permitted until 2026-01-04 00:38:18.81240067 +0000 UTC m=+1672.800965756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2" (UID: "2d896738-b0d6-4d0a-81b6-3e24ac1ce92d") : secret "default-cloud1-ceil-meter-proxy-tls" not found
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.312920 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.313219 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.320076 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.340985 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-58w5k\" (UniqueName: \"kubernetes.io/projected/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-kube-api-access-58w5k\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.651111 5108 generic.go:358] "Generic (PLEG): container finished" podID="52617309-d688-4e3c-8a64-1894511950bc" containerID="2eb419361cac06d811224968f43bb432fa48dca1326d0eb729dd7539892dac3c" exitCode=0
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.651147 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"52617309-d688-4e3c-8a64-1894511950bc","Type":"ContainerDied","Data":"2eb419361cac06d811224968f43bb432fa48dca1326d0eb729dd7539892dac3c"}
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.718123 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl"
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.726433 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl\" (UID: \"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl"
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.822137 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:18 crc kubenswrapper[5108]: E0104 00:38:18.822544 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Jan 04 00:38:18 crc kubenswrapper[5108]: E0104 00:38:18.822652 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-default-cloud1-ceil-meter-proxy-tls podName:2d896738-b0d6-4d0a-81b6-3e24ac1ce92d nodeName:}" failed. No retries permitted until 2026-01-04 00:38:19.822622426 +0000 UTC m=+1673.811187512 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2" (UID: "2d896738-b0d6-4d0a-81b6-3e24ac1ce92d") : secret "default-cloud1-ceil-meter-proxy-tls" not found
Jan 04 00:38:18 crc kubenswrapper[5108]: I0104 00:38:18.941113 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl"
Jan 04 00:38:19 crc kubenswrapper[5108]: I0104 00:38:19.840556 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:19 crc kubenswrapper[5108]: I0104 00:38:19.849976 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d896738-b0d6-4d0a-81b6-3e24ac1ce92d-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2\" (UID: \"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:19 crc kubenswrapper[5108]: I0104 00:38:19.933954 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.505957 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"]
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.534086 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"]
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.534339 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.540836 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\""
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.545119 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\""
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.620997 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fe634a26-6a59-4ba4-b860-9fb7908015ed-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.621070 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/fe634a26-6a59-4ba4-b860-9fb7908015ed-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.621376 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe634a26-6a59-4ba4-b860-9fb7908015ed-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.621421 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/fe634a26-6a59-4ba4-b860-9fb7908015ed-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.621854 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qth89\" (UniqueName: \"kubernetes.io/projected/fe634a26-6a59-4ba4-b860-9fb7908015ed-kube-api-access-qth89\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.723743 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qth89\" (UniqueName: \"kubernetes.io/projected/fe634a26-6a59-4ba4-b860-9fb7908015ed-kube-api-access-qth89\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.723940 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fe634a26-6a59-4ba4-b860-9fb7908015ed-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.723990 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/fe634a26-6a59-4ba4-b860-9fb7908015ed-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.724025 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe634a26-6a59-4ba4-b860-9fb7908015ed-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.724060 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/fe634a26-6a59-4ba4-b860-9fb7908015ed-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:22 crc kubenswrapper[5108]: E0104 00:38:22.724649 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Jan 04 00:38:22 crc kubenswrapper[5108]: E0104 00:38:22.724777 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe634a26-6a59-4ba4-b860-9fb7908015ed-default-cloud1-sens-meter-proxy-tls podName:fe634a26-6a59-4ba4-b860-9fb7908015ed nodeName:}" failed. No retries permitted until 2026-01-04 00:38:23.22473135 +0000 UTC m=+1677.213296436 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/fe634a26-6a59-4ba4-b860-9fb7908015ed-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz" (UID: "fe634a26-6a59-4ba4-b860-9fb7908015ed") : secret "default-cloud1-sens-meter-proxy-tls" not found
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.725648 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/fe634a26-6a59-4ba4-b860-9fb7908015ed-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.725683 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fe634a26-6a59-4ba4-b860-9fb7908015ed-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.734065 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/fe634a26-6a59-4ba4-b860-9fb7908015ed-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:22 crc kubenswrapper[5108]: I0104 00:38:22.752132 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qth89\" (UniqueName: \"kubernetes.io/projected/fe634a26-6a59-4ba4-b860-9fb7908015ed-kube-api-access-qth89\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:23 crc kubenswrapper[5108]: I0104 00:38:23.233328 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe634a26-6a59-4ba4-b860-9fb7908015ed-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:23 crc kubenswrapper[5108]: E0104 00:38:23.233588 5108 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Jan 04 00:38:23 crc kubenswrapper[5108]: E0104 00:38:23.233743 5108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe634a26-6a59-4ba4-b860-9fb7908015ed-default-cloud1-sens-meter-proxy-tls podName:fe634a26-6a59-4ba4-b860-9fb7908015ed nodeName:}" failed. No retries permitted until 2026-01-04 00:38:24.233712101 +0000 UTC m=+1678.222277187 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/fe634a26-6a59-4ba4-b860-9fb7908015ed-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz" (UID: "fe634a26-6a59-4ba4-b860-9fb7908015ed") : secret "default-cloud1-sens-meter-proxy-tls" not found
Jan 04 00:38:23 crc kubenswrapper[5108]: I0104 00:38:23.899999 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2"]
Jan 04 00:38:24 crc kubenswrapper[5108]: I0104 00:38:24.165343 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl"]
Jan 04 00:38:24 crc kubenswrapper[5108]: I0104 00:38:24.250495 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe634a26-6a59-4ba4-b860-9fb7908015ed-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:24 crc kubenswrapper[5108]: I0104 00:38:24.273743 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe634a26-6a59-4ba4-b860-9fb7908015ed-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz\" (UID: \"fe634a26-6a59-4ba4-b860-9fb7908015ed\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:24 crc kubenswrapper[5108]: I0104 00:38:24.362986 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"
Jan 04 00:38:24 crc kubenswrapper[5108]: I0104 00:38:24.743346 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"bbb51482-bfac-4350-9ec7-b9470cbf4b19","Type":"ContainerStarted","Data":"af9f6869332a6b27a8bb93f2117711399c0115fca879dabb79519122003a6adb"}
Jan 04 00:38:24 crc kubenswrapper[5108]: I0104 00:38:24.745437 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" event={"ID":"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04","Type":"ContainerStarted","Data":"d874c8173dc746f259973c3f229ed24ac7d7d94f4d8f438df165492f9d1c5979"}
Jan 04 00:38:24 crc kubenswrapper[5108]: I0104 00:38:24.749508 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2" event={"ID":"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d","Type":"ContainerStarted","Data":"df3c6e22b0cde8888ea33cd1cf99bd2815650fa70f2281aaf9897b46dde58756"}
Jan 04 00:38:24 crc kubenswrapper[5108]: I0104 00:38:24.781493 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.955772414 podStartE2EDuration="44.781469788s" podCreationTimestamp="2026-01-04 00:37:40 +0000 UTC" firstStartedPulling="2026-01-04 00:37:43.801714132 +0000 UTC m=+1637.790279218" lastFinishedPulling="2026-01-04 00:38:23.627411506 +0000 UTC m=+1677.615976592" observedRunningTime="2026-01-04 00:38:24.771709609 +0000 UTC m=+1678.760274695" watchObservedRunningTime="2026-01-04 00:38:24.781469788 +0000 UTC m=+1678.770034874"
Jan 04 00:38:24 crc kubenswrapper[5108]: I0104 00:38:24.834943 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz"]
Jan 04 00:38:25 crc kubenswrapper[5108]: I0104 00:38:25.762295 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz" event={"ID":"fe634a26-6a59-4ba4-b860-9fb7908015ed","Type":"ContainerStarted","Data":"5a0fb1e9157ac32dd0c0ce1d1a7587fc0615598aa6e23387f6d39c39a7a8a3e2"}
Jan 04 00:38:26 crc kubenswrapper[5108]: I0104 00:38:26.791176 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" event={"ID":"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04","Type":"ContainerStarted","Data":"273f8fca86fe84fd6beda65d597ea1a44ef8a0607a4f254eb0cbb7619fcc58b2"}
Jan 04 00:38:26 crc kubenswrapper[5108]: I0104 00:38:26.793606 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2" event={"ID":"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d","Type":"ContainerStarted","Data":"e1cd4d86826a9cf6008b0c94498a098259f886aa148b6ce0de438d0a6ca6858b"}
Jan 04 00:38:26 crc kubenswrapper[5108]: I0104 00:38:26.800136 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"52617309-d688-4e3c-8a64-1894511950bc","Type":"ContainerStarted","Data":"750ab8e32e3297a20576ac981ecc8b8de0d8ab8d0e489fd2de0b1df9110639a5"}
Jan 04 00:38:26 crc kubenswrapper[5108]: I0104 00:38:26.808083 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz" event={"ID":"fe634a26-6a59-4ba4-b860-9fb7908015ed","Type":"ContainerStarted","Data":"bbb48276dacaa365502f12ee92ac1e46df7cb42fd9c671542694b86433154ed2"}
Jan 04 00:38:28 crc kubenswrapper[5108]: I0104 00:38:28.550997 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0"
Jan 04 00:38:28 crc kubenswrapper[5108]: I0104 00:38:28.551602 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0"
Jan 04 00:38:28 crc kubenswrapper[5108]: I0104 00:38:28.642654 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0"
Jan 04 00:38:28 crc kubenswrapper[5108]: I0104 00:38:28.873892 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0"
Jan 04 00:38:29 crc kubenswrapper[5108]: I0104 00:38:29.845072 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"52617309-d688-4e3c-8a64-1894511950bc","Type":"ContainerStarted","Data":"3afe065cf02a1b8924db83cf25b70266aaf6e3aa5f680e73dce3eebc8421542c"}
Jan 04 00:38:30 crc kubenswrapper[5108]: I0104 00:38:30.781850 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"]
Jan 04 00:38:30 crc kubenswrapper[5108]: I0104 00:38:30.817379 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"]
Jan 04 00:38:30 crc kubenswrapper[5108]: I0104 00:38:30.817655 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"
Jan 04 00:38:30 crc kubenswrapper[5108]: I0104 00:38:30.824843 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\""
Jan 04 00:38:30 crc kubenswrapper[5108]: I0104 00:38:30.825183 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\""
Jan 04 00:38:30 crc kubenswrapper[5108]: I0104 00:38:30.978147 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/b08445a1-a583-42f6-b86f-4eb1f0e941d1-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r\" (UID: \"b08445a1-a583-42f6-b86f-4eb1f0e941d1\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"
Jan 04 00:38:30 crc kubenswrapper[5108]: I0104 00:38:30.978232 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56r4k\" (UniqueName: \"kubernetes.io/projected/b08445a1-a583-42f6-b86f-4eb1f0e941d1-kube-api-access-56r4k\") pod \"default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r\" (UID: \"b08445a1-a583-42f6-b86f-4eb1f0e941d1\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"
Jan 04 00:38:30 crc kubenswrapper[5108]: I0104 00:38:30.978422 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b08445a1-a583-42f6-b86f-4eb1f0e941d1-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r\" (UID: \"b08445a1-a583-42f6-b86f-4eb1f0e941d1\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"
Jan 04 00:38:30 crc kubenswrapper[5108]: I0104 00:38:30.978620 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b08445a1-a583-42f6-b86f-4eb1f0e941d1-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r\" (UID: \"b08445a1-a583-42f6-b86f-4eb1f0e941d1\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"
Jan 04 00:38:31 crc kubenswrapper[5108]: I0104 00:38:31.080906 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/b08445a1-a583-42f6-b86f-4eb1f0e941d1-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r\" (UID: \"b08445a1-a583-42f6-b86f-4eb1f0e941d1\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"
Jan 04 00:38:31 crc kubenswrapper[5108]: I0104 00:38:31.080984 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-56r4k\" (UniqueName: \"kubernetes.io/projected/b08445a1-a583-42f6-b86f-4eb1f0e941d1-kube-api-access-56r4k\") pod \"default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r\" (UID: \"b08445a1-a583-42f6-b86f-4eb1f0e941d1\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"
Jan 04 00:38:31 crc kubenswrapper[5108]: I0104 00:38:31.081027 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b08445a1-a583-42f6-b86f-4eb1f0e941d1-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r\" (UID: \"b08445a1-a583-42f6-b86f-4eb1f0e941d1\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"
Jan 04 00:38:31 crc kubenswrapper[5108]: I0104 00:38:31.081064 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b08445a1-a583-42f6-b86f-4eb1f0e941d1-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r\" (UID: \"b08445a1-a583-42f6-b86f-4eb1f0e941d1\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"
Jan 04 00:38:31 crc kubenswrapper[5108]: I0104 00:38:31.081656 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b08445a1-a583-42f6-b86f-4eb1f0e941d1-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r\" (UID: \"b08445a1-a583-42f6-b86f-4eb1f0e941d1\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"
Jan 04 00:38:31 crc kubenswrapper[5108]: I0104 00:38:31.083761 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b08445a1-a583-42f6-b86f-4eb1f0e941d1-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r\" (UID: \"b08445a1-a583-42f6-b86f-4eb1f0e941d1\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"
Jan 04 00:38:31 crc kubenswrapper[5108]: I0104 00:38:31.105330 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/b08445a1-a583-42f6-b86f-4eb1f0e941d1-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r\" (UID: \"b08445a1-a583-42f6-b86f-4eb1f0e941d1\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"
Jan 04 00:38:31 crc kubenswrapper[5108]: I0104 00:38:31.111576 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-56r4k\" (UniqueName: \"kubernetes.io/projected/b08445a1-a583-42f6-b86f-4eb1f0e941d1-kube-api-access-56r4k\") pod \"default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r\" (UID: \"b08445a1-a583-42f6-b86f-4eb1f0e941d1\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"
Jan 04 00:38:31 crc kubenswrapper[5108]: I0104 00:38:31.166334 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"
Jan 04 00:38:31 crc kubenswrapper[5108]: I0104 00:38:31.190901 5108 scope.go:117] "RemoveContainer" containerID="5fbb4cd4295b47cf480ed517fcd2bb0882857df4fe79ab6028bc31da8dd9d724"
Jan 04 00:38:32 crc kubenswrapper[5108]: I0104 00:38:32.041433 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v"]
Jan 04 00:38:32 crc kubenswrapper[5108]: I0104 00:38:32.068155 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v"]
Jan 04 00:38:32 crc kubenswrapper[5108]: I0104 00:38:32.068370 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v"
Jan 04 00:38:32 crc kubenswrapper[5108]: I0104 00:38:32.074401 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\""
Jan 04 00:38:32 crc kubenswrapper[5108]: I0104 00:38:32.201742 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/67fc1329-a5f0-454d-8fc9-d9e5d6410e13-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v\" (UID: \"67fc1329-a5f0-454d-8fc9-d9e5d6410e13\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v"
Jan 04 00:38:32 crc kubenswrapper[5108]: I0104 00:38:32.201824 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName:
\"kubernetes.io/configmap/67fc1329-a5f0-454d-8fc9-d9e5d6410e13-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v\" (UID: \"67fc1329-a5f0-454d-8fc9-d9e5d6410e13\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" Jan 04 00:38:32 crc kubenswrapper[5108]: I0104 00:38:32.201898 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/67fc1329-a5f0-454d-8fc9-d9e5d6410e13-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v\" (UID: \"67fc1329-a5f0-454d-8fc9-d9e5d6410e13\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" Jan 04 00:38:32 crc kubenswrapper[5108]: I0104 00:38:32.201993 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtbrj\" (UniqueName: \"kubernetes.io/projected/67fc1329-a5f0-454d-8fc9-d9e5d6410e13-kube-api-access-qtbrj\") pod \"default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v\" (UID: \"67fc1329-a5f0-454d-8fc9-d9e5d6410e13\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" Jan 04 00:38:32 crc kubenswrapper[5108]: I0104 00:38:32.303475 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qtbrj\" (UniqueName: \"kubernetes.io/projected/67fc1329-a5f0-454d-8fc9-d9e5d6410e13-kube-api-access-qtbrj\") pod \"default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v\" (UID: \"67fc1329-a5f0-454d-8fc9-d9e5d6410e13\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" Jan 04 00:38:32 crc kubenswrapper[5108]: I0104 00:38:32.303593 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/67fc1329-a5f0-454d-8fc9-d9e5d6410e13-socket-dir\") pod 
\"default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v\" (UID: \"67fc1329-a5f0-454d-8fc9-d9e5d6410e13\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" Jan 04 00:38:32 crc kubenswrapper[5108]: I0104 00:38:32.303631 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/67fc1329-a5f0-454d-8fc9-d9e5d6410e13-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v\" (UID: \"67fc1329-a5f0-454d-8fc9-d9e5d6410e13\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" Jan 04 00:38:32 crc kubenswrapper[5108]: I0104 00:38:32.303706 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/67fc1329-a5f0-454d-8fc9-d9e5d6410e13-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v\" (UID: \"67fc1329-a5f0-454d-8fc9-d9e5d6410e13\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" Jan 04 00:38:32 crc kubenswrapper[5108]: I0104 00:38:32.304947 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/67fc1329-a5f0-454d-8fc9-d9e5d6410e13-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v\" (UID: \"67fc1329-a5f0-454d-8fc9-d9e5d6410e13\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" Jan 04 00:38:32 crc kubenswrapper[5108]: I0104 00:38:32.305284 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/67fc1329-a5f0-454d-8fc9-d9e5d6410e13-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v\" (UID: \"67fc1329-a5f0-454d-8fc9-d9e5d6410e13\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" Jan 04 00:38:32 crc 
kubenswrapper[5108]: I0104 00:38:32.314398 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/67fc1329-a5f0-454d-8fc9-d9e5d6410e13-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v\" (UID: \"67fc1329-a5f0-454d-8fc9-d9e5d6410e13\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" Jan 04 00:38:32 crc kubenswrapper[5108]: I0104 00:38:32.328036 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtbrj\" (UniqueName: \"kubernetes.io/projected/67fc1329-a5f0-454d-8fc9-d9e5d6410e13-kube-api-access-qtbrj\") pod \"default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v\" (UID: \"67fc1329-a5f0-454d-8fc9-d9e5d6410e13\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" Jan 04 00:38:32 crc kubenswrapper[5108]: I0104 00:38:32.395664 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" Jan 04 00:38:33 crc kubenswrapper[5108]: I0104 00:38:33.789701 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v"] Jan 04 00:38:33 crc kubenswrapper[5108]: I0104 00:38:33.910743 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" event={"ID":"67fc1329-a5f0-454d-8fc9-d9e5d6410e13","Type":"ContainerStarted","Data":"179ee2ed28a0aac6a00fbcb1040d7e2ca246ec9c311e38e3b655233b0a9ccad9"} Jan 04 00:38:34 crc kubenswrapper[5108]: I0104 00:38:34.066860 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r"] Jan 04 00:38:34 crc kubenswrapper[5108]: W0104 00:38:34.075857 5108 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb08445a1_a583_42f6_b86f_4eb1f0e941d1.slice/crio-783b64c69ba713457434d1c113973dc688d63f624e31e038f3edd0ebe4476dc6 WatchSource:0}: Error finding container 783b64c69ba713457434d1c113973dc688d63f624e31e038f3edd0ebe4476dc6: Status 404 returned error can't find the container with id 783b64c69ba713457434d1c113973dc688d63f624e31e038f3edd0ebe4476dc6 Jan 04 00:38:34 crc kubenswrapper[5108]: I0104 00:38:34.923235 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r" event={"ID":"b08445a1-a583-42f6-b86f-4eb1f0e941d1","Type":"ContainerStarted","Data":"38e1f2876df9a6300ab3c9e2f13d39491f3f1d3508a45a314010f807644196e1"} Jan 04 00:38:34 crc kubenswrapper[5108]: I0104 00:38:34.923844 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r" event={"ID":"b08445a1-a583-42f6-b86f-4eb1f0e941d1","Type":"ContainerStarted","Data":"783b64c69ba713457434d1c113973dc688d63f624e31e038f3edd0ebe4476dc6"} Jan 04 00:38:34 crc kubenswrapper[5108]: I0104 00:38:34.925183 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz" event={"ID":"fe634a26-6a59-4ba4-b860-9fb7908015ed","Type":"ContainerStarted","Data":"d7d72a17febddc6734c1cf2f9375c298d6306de4542fa2657bfab09bf66bca3f"} Jan 04 00:38:34 crc kubenswrapper[5108]: I0104 00:38:34.928118 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" event={"ID":"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04","Type":"ContainerStarted","Data":"635ec4300052024794656f02221657197e3fb1c2d9740f7d0ca769f638224bcc"} Jan 04 00:38:34 crc kubenswrapper[5108]: I0104 00:38:34.931036 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" event={"ID":"67fc1329-a5f0-454d-8fc9-d9e5d6410e13","Type":"ContainerStarted","Data":"2e22ac1a114955125b5730ccbe8dbf9acf97f2c95bf658b0babb9469cc0b54fc"} Jan 04 00:38:34 crc kubenswrapper[5108]: I0104 00:38:34.936931 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2" event={"ID":"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d","Type":"ContainerStarted","Data":"d5ee43f0e4360d0827b21c793e5b97dedc4f9a399353070f00ea11c407f158c2"} Jan 04 00:38:34 crc kubenswrapper[5108]: I0104 00:38:34.946573 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"52617309-d688-4e3c-8a64-1894511950bc","Type":"ContainerStarted","Data":"162f21816339c0de38ae75b9eb67ecdda2f38f7ffb8d4dea73878f145c60d091"} Jan 04 00:38:34 crc kubenswrapper[5108]: I0104 00:38:34.981982 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=25.08895245 podStartE2EDuration="39.981941417s" podCreationTimestamp="2026-01-04 00:37:55 +0000 UTC" firstStartedPulling="2026-01-04 00:38:18.65255037 +0000 UTC m=+1672.641115456" lastFinishedPulling="2026-01-04 00:38:33.545539337 +0000 UTC m=+1687.534104423" observedRunningTime="2026-01-04 00:38:34.979080028 +0000 UTC m=+1688.967645104" watchObservedRunningTime="2026-01-04 00:38:34.981941417 +0000 UTC m=+1688.970506503" Jan 04 00:38:39 crc kubenswrapper[5108]: I0104 00:38:39.032948 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r" event={"ID":"b08445a1-a583-42f6-b86f-4eb1f0e941d1","Type":"ContainerStarted","Data":"ed68c398af83b530d4a0ab0834bffbd86cbd84bf71bff9b6f9b98a694a85b9a8"} Jan 04 00:38:39 crc kubenswrapper[5108]: I0104 00:38:39.043769 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz" event={"ID":"fe634a26-6a59-4ba4-b860-9fb7908015ed","Type":"ContainerStarted","Data":"7ce5c5e25164057533501763206d701684a9f2adeb30bd7cdb3ae4a45abf3084"} Jan 04 00:38:39 crc kubenswrapper[5108]: I0104 00:38:39.060461 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r" podStartSLOduration=4.650888438 podStartE2EDuration="9.06041593s" podCreationTimestamp="2026-01-04 00:38:30 +0000 UTC" firstStartedPulling="2026-01-04 00:38:34.08104958 +0000 UTC m=+1688.069614666" lastFinishedPulling="2026-01-04 00:38:38.490577082 +0000 UTC m=+1692.479142158" observedRunningTime="2026-01-04 00:38:39.057253383 +0000 UTC m=+1693.045818479" watchObservedRunningTime="2026-01-04 00:38:39.06041593 +0000 UTC m=+1693.048981016" Jan 04 00:38:40 crc kubenswrapper[5108]: I0104 00:38:40.101257 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" event={"ID":"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04","Type":"ContainerStarted","Data":"500b1a0ecfef1a117a7021c3fd7c800e632c226ccd01ed2ed4ba35aa6120ce38"} Jan 04 00:38:40 crc kubenswrapper[5108]: I0104 00:38:40.108004 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" event={"ID":"67fc1329-a5f0-454d-8fc9-d9e5d6410e13","Type":"ContainerStarted","Data":"d6df56d1cc99b08097fb1174fce9fa2f6620d3d1c4e384a89386c054b1928061"} Jan 04 00:38:40 crc kubenswrapper[5108]: I0104 00:38:40.110990 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2" event={"ID":"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d","Type":"ContainerStarted","Data":"0ebaec17059f675040c20f01e6cc2e5e6aa70603b975845f9ff5cde878263692"} Jan 04 00:38:40 crc kubenswrapper[5108]: I0104 
00:38:40.146032 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz" podStartSLOduration=5.119773592 podStartE2EDuration="18.146003235s" podCreationTimestamp="2026-01-04 00:38:22 +0000 UTC" firstStartedPulling="2026-01-04 00:38:25.527058217 +0000 UTC m=+1679.515623303" lastFinishedPulling="2026-01-04 00:38:38.55328786 +0000 UTC m=+1692.541852946" observedRunningTime="2026-01-04 00:38:40.137037699 +0000 UTC m=+1694.125602795" watchObservedRunningTime="2026-01-04 00:38:40.146003235 +0000 UTC m=+1694.134568321" Jan 04 00:38:42 crc kubenswrapper[5108]: I0104 00:38:42.155461 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2" podStartSLOduration=9.518407904 podStartE2EDuration="24.155277756s" podCreationTimestamp="2026-01-04 00:38:18 +0000 UTC" firstStartedPulling="2026-01-04 00:38:23.960461011 +0000 UTC m=+1677.949026097" lastFinishedPulling="2026-01-04 00:38:38.597330873 +0000 UTC m=+1692.585895949" observedRunningTime="2026-01-04 00:38:42.152243982 +0000 UTC m=+1696.140809108" watchObservedRunningTime="2026-01-04 00:38:42.155277756 +0000 UTC m=+1696.143842842" Jan 04 00:38:43 crc kubenswrapper[5108]: I0104 00:38:43.164682 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" podStartSLOduration=6.445904181 podStartE2EDuration="11.164648882s" podCreationTimestamp="2026-01-04 00:38:32 +0000 UTC" firstStartedPulling="2026-01-04 00:38:33.834811656 +0000 UTC m=+1687.823376742" lastFinishedPulling="2026-01-04 00:38:38.553556357 +0000 UTC m=+1692.542121443" observedRunningTime="2026-01-04 00:38:43.156387844 +0000 UTC m=+1697.144952930" watchObservedRunningTime="2026-01-04 00:38:43.164648882 +0000 UTC m=+1697.153213968" Jan 04 00:38:51 crc kubenswrapper[5108]: I0104 00:38:51.276936 
5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" podStartSLOduration=22.908602873 podStartE2EDuration="37.276910957s" podCreationTimestamp="2026-01-04 00:38:14 +0000 UTC" firstStartedPulling="2026-01-04 00:38:24.16950478 +0000 UTC m=+1678.158069876" lastFinishedPulling="2026-01-04 00:38:38.537812874 +0000 UTC m=+1692.526377960" observedRunningTime="2026-01-04 00:38:47.22441608 +0000 UTC m=+1701.212981176" watchObservedRunningTime="2026-01-04 00:38:51.276910957 +0000 UTC m=+1705.265476043" Jan 04 00:38:51 crc kubenswrapper[5108]: I0104 00:38:51.282296 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-8blx7"] Jan 04 00:38:51 crc kubenswrapper[5108]: I0104 00:38:51.282638 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" podUID="dc18b015-2dc5-4ecf-a373-a9a04b7ab311" containerName="default-interconnect" containerID="cri-o://994133e9d7f332264051a3a382db80b98799418a27ffa921efb066837e7cc085" gracePeriod=30 Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.208327 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.254462 5108 generic.go:358] "Generic (PLEG): container finished" podID="b08445a1-a583-42f6-b86f-4eb1f0e941d1" containerID="38e1f2876df9a6300ab3c9e2f13d39491f3f1d3508a45a314010f807644196e1" exitCode=0 Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.254468 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r" event={"ID":"b08445a1-a583-42f6-b86f-4eb1f0e941d1","Type":"ContainerDied","Data":"38e1f2876df9a6300ab3c9e2f13d39491f3f1d3508a45a314010f807644196e1"} Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.255217 5108 scope.go:117] "RemoveContainer" containerID="38e1f2876df9a6300ab3c9e2f13d39491f3f1d3508a45a314010f807644196e1" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.268001 5108 generic.go:358] "Generic (PLEG): container finished" podID="fe634a26-6a59-4ba4-b860-9fb7908015ed" containerID="d7d72a17febddc6734c1cf2f9375c298d6306de4542fa2657bfab09bf66bca3f" exitCode=0 Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.268075 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz" event={"ID":"fe634a26-6a59-4ba4-b860-9fb7908015ed","Type":"ContainerDied","Data":"d7d72a17febddc6734c1cf2f9375c298d6306de4542fa2657bfab09bf66bca3f"} Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.268639 5108 scope.go:117] "RemoveContainer" containerID="d7d72a17febddc6734c1cf2f9375c298d6306de4542fa2657bfab09bf66bca3f" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.272477 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-6lxlx"] Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.273441 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="dc18b015-2dc5-4ecf-a373-a9a04b7ab311" containerName="default-interconnect" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.273454 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc18b015-2dc5-4ecf-a373-a9a04b7ab311" containerName="default-interconnect" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.273626 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="dc18b015-2dc5-4ecf-a373-a9a04b7ab311" containerName="default-interconnect" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.279492 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.279525 5108 generic.go:358] "Generic (PLEG): container finished" podID="9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04" containerID="635ec4300052024794656f02221657197e3fb1c2d9740f7d0ca769f638224bcc" exitCode=0 Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.279619 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" event={"ID":"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04","Type":"ContainerDied","Data":"635ec4300052024794656f02221657197e3fb1c2d9740f7d0ca769f638224bcc"} Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.280093 5108 scope.go:117] "RemoveContainer" containerID="635ec4300052024794656f02221657197e3fb1c2d9740f7d0ca769f638224bcc" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.298121 5108 generic.go:358] "Generic (PLEG): container finished" podID="67fc1329-a5f0-454d-8fc9-d9e5d6410e13" containerID="2e22ac1a114955125b5730ccbe8dbf9acf97f2c95bf658b0babb9469cc0b54fc" exitCode=0 Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.298260 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" 
event={"ID":"67fc1329-a5f0-454d-8fc9-d9e5d6410e13","Type":"ContainerDied","Data":"2e22ac1a114955125b5730ccbe8dbf9acf97f2c95bf658b0babb9469cc0b54fc"} Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.299799 5108 scope.go:117] "RemoveContainer" containerID="2e22ac1a114955125b5730ccbe8dbf9acf97f2c95bf658b0babb9469cc0b54fc" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.306754 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-inter-router-ca\") pod \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.306869 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-sasl-config\") pod \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.306906 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-openstack-ca\") pod \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.306939 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhl5m\" (UniqueName: \"kubernetes.io/projected/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-kube-api-access-dhl5m\") pod \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.306999 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" 
(UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-sasl-users\") pod \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.307109 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-openstack-credentials\") pod \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.307189 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-inter-router-credentials\") pod \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\" (UID: \"dc18b015-2dc5-4ecf-a373-a9a04b7ab311\") " Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.316008 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-6lxlx"] Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.319839 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-kube-api-access-dhl5m" (OuterVolumeSpecName: "kube-api-access-dhl5m") pod "dc18b015-2dc5-4ecf-a373-a9a04b7ab311" (UID: "dc18b015-2dc5-4ecf-a373-a9a04b7ab311"). InnerVolumeSpecName "kube-api-access-dhl5m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.322140 5108 generic.go:358] "Generic (PLEG): container finished" podID="2d896738-b0d6-4d0a-81b6-3e24ac1ce92d" containerID="d5ee43f0e4360d0827b21c793e5b97dedc4f9a399353070f00ea11c407f158c2" exitCode=0 Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.322527 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2" event={"ID":"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d","Type":"ContainerDied","Data":"d5ee43f0e4360d0827b21c793e5b97dedc4f9a399353070f00ea11c407f158c2"} Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.323051 5108 scope.go:117] "RemoveContainer" containerID="d5ee43f0e4360d0827b21c793e5b97dedc4f9a399353070f00ea11c407f158c2" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.326931 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "dc18b015-2dc5-4ecf-a373-a9a04b7ab311" (UID: "dc18b015-2dc5-4ecf-a373-a9a04b7ab311"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.343499 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "dc18b015-2dc5-4ecf-a373-a9a04b7ab311" (UID: "dc18b015-2dc5-4ecf-a373-a9a04b7ab311"). InnerVolumeSpecName "sasl-users". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.350338 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "dc18b015-2dc5-4ecf-a373-a9a04b7ab311" (UID: "dc18b015-2dc5-4ecf-a373-a9a04b7ab311"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.361057 5108 generic.go:358] "Generic (PLEG): container finished" podID="dc18b015-2dc5-4ecf-a373-a9a04b7ab311" containerID="994133e9d7f332264051a3a382db80b98799418a27ffa921efb066837e7cc085" exitCode=0 Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.361668 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" event={"ID":"dc18b015-2dc5-4ecf-a373-a9a04b7ab311","Type":"ContainerDied","Data":"994133e9d7f332264051a3a382db80b98799418a27ffa921efb066837e7cc085"} Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.361710 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" event={"ID":"dc18b015-2dc5-4ecf-a373-a9a04b7ab311","Type":"ContainerDied","Data":"5be9c70d015635f1ebea5f85084101d4b24127c84423cf7631c351f4bcba3bbb"} Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.361732 5108 scope.go:117] "RemoveContainer" containerID="994133e9d7f332264051a3a382db80b98799418a27ffa921efb066837e7cc085" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.362149 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-8blx7" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.369924 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "dc18b015-2dc5-4ecf-a373-a9a04b7ab311" (UID: "dc18b015-2dc5-4ecf-a373-a9a04b7ab311"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.378430 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "dc18b015-2dc5-4ecf-a373-a9a04b7ab311" (UID: "dc18b015-2dc5-4ecf-a373-a9a04b7ab311"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.384574 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "dc18b015-2dc5-4ecf-a373-a9a04b7ab311" (UID: "dc18b015-2dc5-4ecf-a373-a9a04b7ab311"). InnerVolumeSpecName "default-interconnect-openstack-credentials". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.408624 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.408676 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.408725 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.408746 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-sasl-users\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.408884 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhvwk\" (UniqueName: \"kubernetes.io/projected/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-kube-api-access-nhvwk\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.409100 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.409134 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-sasl-config\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.409349 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.409524 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.409545 5108 
reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.409556 5108 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-sasl-config\") on node \"crc\" DevicePath \"\"" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.409567 5108 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.409632 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dhl5m\" (UniqueName: \"kubernetes.io/projected/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-kube-api-access-dhl5m\") on node \"crc\" DevicePath \"\"" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.409642 5108 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/dc18b015-2dc5-4ecf-a373-a9a04b7ab311-sasl-users\") on node \"crc\" DevicePath \"\"" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.486279 5108 scope.go:117] "RemoveContainer" containerID="994133e9d7f332264051a3a382db80b98799418a27ffa921efb066837e7cc085" Jan 04 00:38:52 crc kubenswrapper[5108]: E0104 00:38:52.493322 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"994133e9d7f332264051a3a382db80b98799418a27ffa921efb066837e7cc085\": container with ID starting with 994133e9d7f332264051a3a382db80b98799418a27ffa921efb066837e7cc085 not found: ID does not exist" containerID="994133e9d7f332264051a3a382db80b98799418a27ffa921efb066837e7cc085" Jan 04 
00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.493398 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"994133e9d7f332264051a3a382db80b98799418a27ffa921efb066837e7cc085"} err="failed to get container status \"994133e9d7f332264051a3a382db80b98799418a27ffa921efb066837e7cc085\": rpc error: code = NotFound desc = could not find container \"994133e9d7f332264051a3a382db80b98799418a27ffa921efb066837e7cc085\": container with ID starting with 994133e9d7f332264051a3a382db80b98799418a27ffa921efb066837e7cc085 not found: ID does not exist" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.511287 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.511343 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-sasl-config\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.511399 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.511425 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.511462 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.511484 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-sasl-users\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.511506 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nhvwk\" (UniqueName: \"kubernetes.io/projected/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-kube-api-access-nhvwk\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.513758 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-sasl-config\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: 
\"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.520449 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.520453 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.522171 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.525016 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-sasl-users\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.528606 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.531563 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhvwk\" (UniqueName: \"kubernetes.io/projected/f8ca2ca4-0523-4d2d-bec7-2c12ed1be637-kube-api-access-nhvwk\") pod \"default-interconnect-55bf8d5cb-6lxlx\" (UID: \"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637\") " pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.670937 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.718093 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-8blx7"] Jan 04 00:38:52 crc kubenswrapper[5108]: I0104 00:38:52.728993 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-8blx7"] Jan 04 00:38:53 crc kubenswrapper[5108]: I0104 00:38:53.129997 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-6lxlx"] Jan 04 00:38:53 crc kubenswrapper[5108]: I0104 00:38:53.375186 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r" event={"ID":"b08445a1-a583-42f6-b86f-4eb1f0e941d1","Type":"ContainerStarted","Data":"0913ecbe13315a70403c955cee206aa1c751d0a762b8fba78281ddc19e244873"} Jan 04 00:38:53 crc kubenswrapper[5108]: I0104 00:38:53.378690 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz" event={"ID":"fe634a26-6a59-4ba4-b860-9fb7908015ed","Type":"ContainerStarted","Data":"9080054861c38c5ccd15f9f7a49c01fd74439ae420ba71b84e293573a01611d3"} Jan 04 00:38:53 crc kubenswrapper[5108]: I0104 00:38:53.381509 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" event={"ID":"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04","Type":"ContainerStarted","Data":"0bd9b974b08ccebf28ac59ddf70e8ee8ca01ff277f180aeee5ba59ee48d2a1c6"} Jan 04 00:38:53 crc kubenswrapper[5108]: I0104 00:38:53.384448 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" event={"ID":"67fc1329-a5f0-454d-8fc9-d9e5d6410e13","Type":"ContainerStarted","Data":"ea609c94edd9178112d390f0e27aeef7834669527a63a16481f1dca2b71804b7"} Jan 04 00:38:53 crc kubenswrapper[5108]: I0104 00:38:53.386794 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" event={"ID":"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637","Type":"ContainerStarted","Data":"cca1af2e1fefcee7baea258ba810851675820dae945c9ba27ef4de1c50b5bd1e"} Jan 04 00:38:53 crc kubenswrapper[5108]: I0104 00:38:53.386851 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" event={"ID":"f8ca2ca4-0523-4d2d-bec7-2c12ed1be637","Type":"ContainerStarted","Data":"c63e61c3f42176e396397f3e1cd606d664ceb001e7d993ca4d2312af5e4a198e"} Jan 04 00:38:53 crc kubenswrapper[5108]: I0104 00:38:53.391503 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2" event={"ID":"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d","Type":"ContainerStarted","Data":"a6a80e62422fd84112787a708bdbfdc47db57d541b055dbac33ea80b0c052609"} Jan 04 00:38:53 crc kubenswrapper[5108]: I0104 
00:38:53.521347 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-6lxlx" podStartSLOduration=2.521311394 podStartE2EDuration="2.521311394s" podCreationTimestamp="2026-01-04 00:38:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-04 00:38:53.513425887 +0000 UTC m=+1707.501990983" watchObservedRunningTime="2026-01-04 00:38:53.521311394 +0000 UTC m=+1707.509876500" Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.405925 5108 generic.go:358] "Generic (PLEG): container finished" podID="b08445a1-a583-42f6-b86f-4eb1f0e941d1" containerID="0913ecbe13315a70403c955cee206aa1c751d0a762b8fba78281ddc19e244873" exitCode=0 Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.406043 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r" event={"ID":"b08445a1-a583-42f6-b86f-4eb1f0e941d1","Type":"ContainerDied","Data":"0913ecbe13315a70403c955cee206aa1c751d0a762b8fba78281ddc19e244873"} Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.406640 5108 scope.go:117] "RemoveContainer" containerID="38e1f2876df9a6300ab3c9e2f13d39491f3f1d3508a45a314010f807644196e1" Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.407189 5108 scope.go:117] "RemoveContainer" containerID="0913ecbe13315a70403c955cee206aa1c751d0a762b8fba78281ddc19e244873" Jan 04 00:38:54 crc kubenswrapper[5108]: E0104 00:38:54.407566 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r_service-telemetry(b08445a1-a583-42f6-b86f-4eb1f0e941d1)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r" podUID="b08445a1-a583-42f6-b86f-4eb1f0e941d1" Jan 04 
00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.412327 5108 generic.go:358] "Generic (PLEG): container finished" podID="fe634a26-6a59-4ba4-b860-9fb7908015ed" containerID="9080054861c38c5ccd15f9f7a49c01fd74439ae420ba71b84e293573a01611d3" exitCode=0 Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.412440 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz" event={"ID":"fe634a26-6a59-4ba4-b860-9fb7908015ed","Type":"ContainerDied","Data":"9080054861c38c5ccd15f9f7a49c01fd74439ae420ba71b84e293573a01611d3"} Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.413595 5108 scope.go:117] "RemoveContainer" containerID="9080054861c38c5ccd15f9f7a49c01fd74439ae420ba71b84e293573a01611d3" Jan 04 00:38:54 crc kubenswrapper[5108]: E0104 00:38:54.414009 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz_service-telemetry(fe634a26-6a59-4ba4-b860-9fb7908015ed)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz" podUID="fe634a26-6a59-4ba4-b860-9fb7908015ed" Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.415803 5108 generic.go:358] "Generic (PLEG): container finished" podID="9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04" containerID="0bd9b974b08ccebf28ac59ddf70e8ee8ca01ff277f180aeee5ba59ee48d2a1c6" exitCode=0 Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.415850 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" event={"ID":"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04","Type":"ContainerDied","Data":"0bd9b974b08ccebf28ac59ddf70e8ee8ca01ff277f180aeee5ba59ee48d2a1c6"} Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.416890 5108 scope.go:117] "RemoveContainer" 
containerID="0bd9b974b08ccebf28ac59ddf70e8ee8ca01ff277f180aeee5ba59ee48d2a1c6" Jan 04 00:38:54 crc kubenswrapper[5108]: E0104 00:38:54.417383 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl_service-telemetry(9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" podUID="9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04" Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.420523 5108 generic.go:358] "Generic (PLEG): container finished" podID="67fc1329-a5f0-454d-8fc9-d9e5d6410e13" containerID="ea609c94edd9178112d390f0e27aeef7834669527a63a16481f1dca2b71804b7" exitCode=0 Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.420896 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" event={"ID":"67fc1329-a5f0-454d-8fc9-d9e5d6410e13","Type":"ContainerDied","Data":"ea609c94edd9178112d390f0e27aeef7834669527a63a16481f1dca2b71804b7"} Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.421638 5108 scope.go:117] "RemoveContainer" containerID="ea609c94edd9178112d390f0e27aeef7834669527a63a16481f1dca2b71804b7" Jan 04 00:38:54 crc kubenswrapper[5108]: E0104 00:38:54.422012 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v_service-telemetry(67fc1329-a5f0-454d-8fc9-d9e5d6410e13)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" podUID="67fc1329-a5f0-454d-8fc9-d9e5d6410e13" Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.425349 5108 generic.go:358] "Generic (PLEG): container finished" podID="2d896738-b0d6-4d0a-81b6-3e24ac1ce92d" 
containerID="a6a80e62422fd84112787a708bdbfdc47db57d541b055dbac33ea80b0c052609" exitCode=0 Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.425439 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2" event={"ID":"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d","Type":"ContainerDied","Data":"a6a80e62422fd84112787a708bdbfdc47db57d541b055dbac33ea80b0c052609"} Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.426417 5108 scope.go:117] "RemoveContainer" containerID="a6a80e62422fd84112787a708bdbfdc47db57d541b055dbac33ea80b0c052609" Jan 04 00:38:54 crc kubenswrapper[5108]: E0104 00:38:54.426799 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2_service-telemetry(2d896738-b0d6-4d0a-81b6-3e24ac1ce92d)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2" podUID="2d896738-b0d6-4d0a-81b6-3e24ac1ce92d" Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.468087 5108 scope.go:117] "RemoveContainer" containerID="d7d72a17febddc6734c1cf2f9375c298d6306de4542fa2657bfab09bf66bca3f" Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.469692 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc18b015-2dc5-4ecf-a373-a9a04b7ab311" path="/var/lib/kubelet/pods/dc18b015-2dc5-4ecf-a373-a9a04b7ab311/volumes" Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.516514 5108 scope.go:117] "RemoveContainer" containerID="635ec4300052024794656f02221657197e3fb1c2d9740f7d0ca769f638224bcc" Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.576514 5108 scope.go:117] "RemoveContainer" containerID="2e22ac1a114955125b5730ccbe8dbf9acf97f2c95bf658b0babb9469cc0b54fc" Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.635819 5108 scope.go:117] "RemoveContainer" 
containerID="d5ee43f0e4360d0827b21c793e5b97dedc4f9a399353070f00ea11c407f158c2" Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.917760 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 04 00:38:54 crc kubenswrapper[5108]: I0104 00:38:54.917882 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 04 00:38:57 crc kubenswrapper[5108]: I0104 00:38:57.026693 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Jan 04 00:38:57 crc kubenswrapper[5108]: I0104 00:38:57.223865 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Jan 04 00:38:57 crc kubenswrapper[5108]: I0104 00:38:57.225935 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 04 00:38:57 crc kubenswrapper[5108]: I0104 00:38:57.229740 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Jan 04 00:38:57 crc kubenswrapper[5108]: I0104 00:38:57.230128 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Jan 04 00:38:57 crc kubenswrapper[5108]: I0104 00:38:57.307944 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/5a987618-a0d0-4689-99c8-2c52b1f183b9-qdr-test-config\") pod \"qdr-test\" (UID: \"5a987618-a0d0-4689-99c8-2c52b1f183b9\") " pod="service-telemetry/qdr-test" Jan 04 00:38:57 crc kubenswrapper[5108]: I0104 00:38:57.308041 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/5a987618-a0d0-4689-99c8-2c52b1f183b9-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"5a987618-a0d0-4689-99c8-2c52b1f183b9\") " pod="service-telemetry/qdr-test" Jan 04 00:38:57 crc kubenswrapper[5108]: I0104 00:38:57.308072 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh7cb\" (UniqueName: \"kubernetes.io/projected/5a987618-a0d0-4689-99c8-2c52b1f183b9-kube-api-access-dh7cb\") pod \"qdr-test\" (UID: \"5a987618-a0d0-4689-99c8-2c52b1f183b9\") " pod="service-telemetry/qdr-test" Jan 04 00:38:57 crc kubenswrapper[5108]: I0104 00:38:57.409436 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: 
\"kubernetes.io/secret/5a987618-a0d0-4689-99c8-2c52b1f183b9-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"5a987618-a0d0-4689-99c8-2c52b1f183b9\") " pod="service-telemetry/qdr-test" Jan 04 00:38:57 crc kubenswrapper[5108]: I0104 00:38:57.409716 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dh7cb\" (UniqueName: \"kubernetes.io/projected/5a987618-a0d0-4689-99c8-2c52b1f183b9-kube-api-access-dh7cb\") pod \"qdr-test\" (UID: \"5a987618-a0d0-4689-99c8-2c52b1f183b9\") " pod="service-telemetry/qdr-test" Jan 04 00:38:57 crc kubenswrapper[5108]: I0104 00:38:57.410101 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/5a987618-a0d0-4689-99c8-2c52b1f183b9-qdr-test-config\") pod \"qdr-test\" (UID: \"5a987618-a0d0-4689-99c8-2c52b1f183b9\") " pod="service-telemetry/qdr-test" Jan 04 00:38:57 crc kubenswrapper[5108]: I0104 00:38:57.411626 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/5a987618-a0d0-4689-99c8-2c52b1f183b9-qdr-test-config\") pod \"qdr-test\" (UID: \"5a987618-a0d0-4689-99c8-2c52b1f183b9\") " pod="service-telemetry/qdr-test" Jan 04 00:38:57 crc kubenswrapper[5108]: I0104 00:38:57.421357 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/5a987618-a0d0-4689-99c8-2c52b1f183b9-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"5a987618-a0d0-4689-99c8-2c52b1f183b9\") " pod="service-telemetry/qdr-test" Jan 04 00:38:57 crc kubenswrapper[5108]: I0104 00:38:57.435439 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh7cb\" (UniqueName: \"kubernetes.io/projected/5a987618-a0d0-4689-99c8-2c52b1f183b9-kube-api-access-dh7cb\") pod \"qdr-test\" (UID: \"5a987618-a0d0-4689-99c8-2c52b1f183b9\") 
" pod="service-telemetry/qdr-test" Jan 04 00:38:57 crc kubenswrapper[5108]: I0104 00:38:57.566826 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 04 00:38:57 crc kubenswrapper[5108]: I0104 00:38:57.867271 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 04 00:38:58 crc kubenswrapper[5108]: I0104 00:38:58.486168 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"5a987618-a0d0-4689-99c8-2c52b1f183b9","Type":"ContainerStarted","Data":"081b3a9313a2ecdddb1667ab09e1093baccd5af7c970a865c7c76d83a31035e6"} Jan 04 00:39:07 crc kubenswrapper[5108]: I0104 00:39:07.449962 5108 scope.go:117] "RemoveContainer" containerID="9080054861c38c5ccd15f9f7a49c01fd74439ae420ba71b84e293573a01611d3" Jan 04 00:39:07 crc kubenswrapper[5108]: I0104 00:39:07.456776 5108 scope.go:117] "RemoveContainer" containerID="ea609c94edd9178112d390f0e27aeef7834669527a63a16481f1dca2b71804b7" Jan 04 00:39:07 crc kubenswrapper[5108]: I0104 00:39:07.458485 5108 scope.go:117] "RemoveContainer" containerID="0bd9b974b08ccebf28ac59ddf70e8ee8ca01ff277f180aeee5ba59ee48d2a1c6" Jan 04 00:39:08 crc kubenswrapper[5108]: I0104 00:39:08.449454 5108 scope.go:117] "RemoveContainer" containerID="a6a80e62422fd84112787a708bdbfdc47db57d541b055dbac33ea80b0c052609" Jan 04 00:39:09 crc kubenswrapper[5108]: I0104 00:39:09.448577 5108 scope.go:117] "RemoveContainer" containerID="0913ecbe13315a70403c955cee206aa1c751d0a762b8fba78281ddc19e244873" Jan 04 00:39:10 crc kubenswrapper[5108]: I0104 00:39:10.604582 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r" event={"ID":"b08445a1-a583-42f6-b86f-4eb1f0e941d1","Type":"ContainerStarted","Data":"a41cccab6f398677656366966f0bdab1d524db34e874a9527d782bf950ba9ab4"} Jan 04 00:39:10 crc kubenswrapper[5108]: I0104 00:39:10.608768 5108 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz" event={"ID":"fe634a26-6a59-4ba4-b860-9fb7908015ed","Type":"ContainerStarted","Data":"a4683fe1a3855c5a34b27f5fe16163464f02349bcec623ecaf965ae4f2129a30"} Jan 04 00:39:10 crc kubenswrapper[5108]: I0104 00:39:10.613136 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl" event={"ID":"9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04","Type":"ContainerStarted","Data":"b7638ff8ee91926330f6209416754fbab1fafc3bbaf36f6f3f27e98bc6bcac3e"} Jan 04 00:39:10 crc kubenswrapper[5108]: I0104 00:39:10.617100 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v" event={"ID":"67fc1329-a5f0-454d-8fc9-d9e5d6410e13","Type":"ContainerStarted","Data":"8b93b39f79719576fae2778de4779053a1c1248e74b5fdaf4f02a003f268fbaa"} Jan 04 00:39:10 crc kubenswrapper[5108]: I0104 00:39:10.624433 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2" event={"ID":"2d896738-b0d6-4d0a-81b6-3e24ac1ce92d","Type":"ContainerStarted","Data":"81009d7b32c2830b46269c7c857a4f090eae924fc7e847f1a9d768d9d9fe280d"} Jan 04 00:39:10 crc kubenswrapper[5108]: I0104 00:39:10.633985 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"5a987618-a0d0-4689-99c8-2c52b1f183b9","Type":"ContainerStarted","Data":"7b2cde3cfbbd9e91836335a1088ff1b702da044233de34b55ff149c3256f356c"} Jan 04 00:39:10 crc kubenswrapper[5108]: I0104 00:39:10.756859 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.569592393 podStartE2EDuration="13.756821022s" podCreationTimestamp="2026-01-04 00:38:57 +0000 UTC" firstStartedPulling="2026-01-04 00:38:57.903144294 +0000 UTC m=+1711.891709380" 
lastFinishedPulling="2026-01-04 00:39:10.090372913 +0000 UTC m=+1724.078938009" observedRunningTime="2026-01-04 00:39:10.749612374 +0000 UTC m=+1724.738177480" watchObservedRunningTime="2026-01-04 00:39:10.756821022 +0000 UTC m=+1724.745386108" Jan 04 00:39:11 crc kubenswrapper[5108]: I0104 00:39:11.055073 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-b95s4"] Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.161325 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-b95s4"] Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.161508 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.161657 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.171216 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.174656 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.177962 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.178435 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.180007 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Jan 04 00:39:12 crc kubenswrapper[5108]: 
I0104 00:39:12.180479 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.235812 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6thtp\" (UniqueName: \"kubernetes.io/projected/e7829b0a-fd5d-468e-9df6-b265e62d4278-kube-api-access-6thtp\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.237263 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-sensubility-config\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.237332 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.237384 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-ceilometer-publisher\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.237454 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.237490 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-collectd-config\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.237596 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-healthcheck-log\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.340321 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-sensubility-config\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.340384 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 
00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.340416 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-ceilometer-publisher\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.340448 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.340469 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-collectd-config\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.340501 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-healthcheck-log\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.340570 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6thtp\" (UniqueName: \"kubernetes.io/projected/e7829b0a-fd5d-468e-9df6-b265e62d4278-kube-api-access-6thtp\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " 
pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.342423 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.342577 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-healthcheck-log\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.342442 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-collectd-config\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.342795 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-ceilometer-publisher\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.343137 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " 
pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.343822 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-sensubility-config\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.371015 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6thtp\" (UniqueName: \"kubernetes.io/projected/e7829b0a-fd5d-468e-9df6-b265e62d4278-kube-api-access-6thtp\") pod \"stf-smoketest-smoke1-b95s4\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") " pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.384560 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.384779 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.442386 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sldvf\" (UniqueName: \"kubernetes.io/projected/dab5a6c3-72e8-4c3e-89fc-9ec118c63783-kube-api-access-sldvf\") pod \"curl\" (UID: \"dab5a6c3-72e8-4c3e-89fc-9ec118c63783\") " pod="service-telemetry/curl" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.513711 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-b95s4" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.545459 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sldvf\" (UniqueName: \"kubernetes.io/projected/dab5a6c3-72e8-4c3e-89fc-9ec118c63783-kube-api-access-sldvf\") pod \"curl\" (UID: \"dab5a6c3-72e8-4c3e-89fc-9ec118c63783\") " pod="service-telemetry/curl" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.574055 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sldvf\" (UniqueName: \"kubernetes.io/projected/dab5a6c3-72e8-4c3e-89fc-9ec118c63783-kube-api-access-sldvf\") pod \"curl\" (UID: \"dab5a6c3-72e8-4c3e-89fc-9ec118c63783\") " pod="service-telemetry/curl" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.719815 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 04 00:39:12 crc kubenswrapper[5108]: I0104 00:39:12.839873 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-b95s4"] Jan 04 00:39:13 crc kubenswrapper[5108]: I0104 00:39:13.029837 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 04 00:39:13 crc kubenswrapper[5108]: W0104 00:39:13.034247 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddab5a6c3_72e8_4c3e_89fc_9ec118c63783.slice/crio-703a2dd7785ac1b25a1a843b8e28e3485caa7e017a0712d7bdf37dd3cf707439 WatchSource:0}: Error finding container 703a2dd7785ac1b25a1a843b8e28e3485caa7e017a0712d7bdf37dd3cf707439: Status 404 returned error can't find the container with id 703a2dd7785ac1b25a1a843b8e28e3485caa7e017a0712d7bdf37dd3cf707439 Jan 04 00:39:13 crc kubenswrapper[5108]: I0104 00:39:13.777449 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" 
event={"ID":"dab5a6c3-72e8-4c3e-89fc-9ec118c63783","Type":"ContainerStarted","Data":"703a2dd7785ac1b25a1a843b8e28e3485caa7e017a0712d7bdf37dd3cf707439"} Jan 04 00:39:13 crc kubenswrapper[5108]: I0104 00:39:13.783382 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-b95s4" event={"ID":"e7829b0a-fd5d-468e-9df6-b265e62d4278","Type":"ContainerStarted","Data":"6c6099b7115655e30259d9e9050c465e331f470a0782d99c869a2a93b38bedcc"} Jan 04 00:39:18 crc kubenswrapper[5108]: I0104 00:39:18.481402 5108 generic.go:358] "Generic (PLEG): container finished" podID="dab5a6c3-72e8-4c3e-89fc-9ec118c63783" containerID="efe86babc7b865ee16e7b9943a0b9e60b728043fe6dfce335a0a4e32db0be64e" exitCode=0 Jan 04 00:39:18 crc kubenswrapper[5108]: I0104 00:39:18.481579 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"dab5a6c3-72e8-4c3e-89fc-9ec118c63783","Type":"ContainerDied","Data":"efe86babc7b865ee16e7b9943a0b9e60b728043fe6dfce335a0a4e32db0be64e"} Jan 04 00:39:24 crc kubenswrapper[5108]: I0104 00:39:24.917658 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 04 00:39:24 crc kubenswrapper[5108]: I0104 00:39:24.918273 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 04 00:39:26 crc kubenswrapper[5108]: I0104 00:39:26.996595 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 04 00:39:27 crc kubenswrapper[5108]: I0104 00:39:27.170397 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_dab5a6c3-72e8-4c3e-89fc-9ec118c63783/curl/0.log" Jan 04 00:39:27 crc kubenswrapper[5108]: I0104 00:39:27.176020 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sldvf\" (UniqueName: \"kubernetes.io/projected/dab5a6c3-72e8-4c3e-89fc-9ec118c63783-kube-api-access-sldvf\") pod \"dab5a6c3-72e8-4c3e-89fc-9ec118c63783\" (UID: \"dab5a6c3-72e8-4c3e-89fc-9ec118c63783\") " Jan 04 00:39:27 crc kubenswrapper[5108]: I0104 00:39:27.180475 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab5a6c3-72e8-4c3e-89fc-9ec118c63783-kube-api-access-sldvf" (OuterVolumeSpecName: "kube-api-access-sldvf") pod "dab5a6c3-72e8-4c3e-89fc-9ec118c63783" (UID: "dab5a6c3-72e8-4c3e-89fc-9ec118c63783"). InnerVolumeSpecName "kube-api-access-sldvf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:39:27 crc kubenswrapper[5108]: I0104 00:39:27.278548 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sldvf\" (UniqueName: \"kubernetes.io/projected/dab5a6c3-72e8-4c3e-89fc-9ec118c63783-kube-api-access-sldvf\") on node \"crc\" DevicePath \"\"" Jan 04 00:39:27 crc kubenswrapper[5108]: I0104 00:39:27.435871 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-hjv6t_f61d3277-40d7-4ac1-994c-e64ce83b3fe9/prometheus-webhook-snmp/0.log" Jan 04 00:39:27 crc kubenswrapper[5108]: I0104 00:39:27.583013 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 04 00:39:27 crc kubenswrapper[5108]: I0104 00:39:27.583062 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"dab5a6c3-72e8-4c3e-89fc-9ec118c63783","Type":"ContainerDied","Data":"703a2dd7785ac1b25a1a843b8e28e3485caa7e017a0712d7bdf37dd3cf707439"} Jan 04 00:39:27 crc kubenswrapper[5108]: I0104 00:39:27.584433 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="703a2dd7785ac1b25a1a843b8e28e3485caa7e017a0712d7bdf37dd3cf707439" Jan 04 00:39:27 crc kubenswrapper[5108]: I0104 00:39:27.585163 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-b95s4" event={"ID":"e7829b0a-fd5d-468e-9df6-b265e62d4278","Type":"ContainerStarted","Data":"a96bddcd52eeabe4c3deab9ae05e4792ece1c6382f88bce9152aec983e13ab90"} Jan 04 00:39:33 crc kubenswrapper[5108]: I0104 00:39:33.643610 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-b95s4" event={"ID":"e7829b0a-fd5d-468e-9df6-b265e62d4278","Type":"ContainerStarted","Data":"5f5ae6b76811d90dd7df8b8dc255fbdf31e8bde78deb1ec6d32e30e13f1c8edd"} Jan 04 00:39:54 crc kubenswrapper[5108]: I0104 00:39:54.916640 5108 patch_prober.go:28] interesting pod/machine-config-daemon-njl5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 04 00:39:54 crc kubenswrapper[5108]: I0104 00:39:54.917508 5108 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 04 00:39:54 crc kubenswrapper[5108]: 
I0104 00:39:54.917569 5108 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" Jan 04 00:39:54 crc kubenswrapper[5108]: I0104 00:39:54.918346 5108 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"} pod="openshift-machine-config-operator/machine-config-daemon-njl5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 04 00:39:54 crc kubenswrapper[5108]: I0104 00:39:54.918426 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerName="machine-config-daemon" containerID="cri-o://15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57" gracePeriod=600 Jan 04 00:39:55 crc kubenswrapper[5108]: E0104 00:39:55.558980 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" Jan 04 00:39:55 crc kubenswrapper[5108]: I0104 00:39:55.855681 5108 generic.go:358] "Generic (PLEG): container finished" podID="f377d71c-c91f-4a27-8276-7e06263de9f6" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57" exitCode=0 Jan 04 00:39:55 crc kubenswrapper[5108]: I0104 00:39:55.855784 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" 
event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerDied","Data":"15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"} Jan 04 00:39:55 crc kubenswrapper[5108]: I0104 00:39:55.856545 5108 scope.go:117] "RemoveContainer" containerID="71e1a23e6a33296265d8312485d92dabf3435cdf7d47549db16b40e0523240ea" Jan 04 00:39:55 crc kubenswrapper[5108]: I0104 00:39:55.857343 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57" Jan 04 00:39:55 crc kubenswrapper[5108]: E0104 00:39:55.857762 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" Jan 04 00:39:55 crc kubenswrapper[5108]: I0104 00:39:55.895124 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-b95s4" podStartSLOduration=25.084459359 podStartE2EDuration="44.895098236s" podCreationTimestamp="2026-01-04 00:39:11 +0000 UTC" firstStartedPulling="2026-01-04 00:39:12.876548316 +0000 UTC m=+1726.865113402" lastFinishedPulling="2026-01-04 00:39:32.687187183 +0000 UTC m=+1746.675752279" observedRunningTime="2026-01-04 00:39:33.672458014 +0000 UTC m=+1747.661023120" watchObservedRunningTime="2026-01-04 00:39:55.895098236 +0000 UTC m=+1769.883663322" Jan 04 00:39:57 crc kubenswrapper[5108]: I0104 00:39:57.588190 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-hjv6t_f61d3277-40d7-4ac1-994c-e64ce83b3fe9/prometheus-webhook-snmp/0.log" Jan 04 00:40:00 crc kubenswrapper[5108]: I0104 00:40:00.139111 5108 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29458120-hd4pv"] Jan 04 00:40:00 crc kubenswrapper[5108]: I0104 00:40:00.145739 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dab5a6c3-72e8-4c3e-89fc-9ec118c63783" containerName="curl" Jan 04 00:40:00 crc kubenswrapper[5108]: I0104 00:40:00.145797 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab5a6c3-72e8-4c3e-89fc-9ec118c63783" containerName="curl" Jan 04 00:40:00 crc kubenswrapper[5108]: I0104 00:40:00.146147 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="dab5a6c3-72e8-4c3e-89fc-9ec118c63783" containerName="curl" Jan 04 00:40:00 crc kubenswrapper[5108]: I0104 00:40:00.669744 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458120-hd4pv" Jan 04 00:40:00 crc kubenswrapper[5108]: I0104 00:40:00.677893 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-s7k94\"" Jan 04 00:40:00 crc kubenswrapper[5108]: I0104 00:40:00.678646 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 04 00:40:00 crc kubenswrapper[5108]: I0104 00:40:00.680852 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458120-hd4pv"] Jan 04 00:40:00 crc kubenswrapper[5108]: I0104 00:40:00.682398 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 04 00:40:00 crc kubenswrapper[5108]: I0104 00:40:00.764502 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmrvq\" (UniqueName: \"kubernetes.io/projected/28d457cb-d548-49f2-8d4b-b424a74c750b-kube-api-access-qmrvq\") pod \"auto-csr-approver-29458120-hd4pv\" (UID: \"28d457cb-d548-49f2-8d4b-b424a74c750b\") " 
pod="openshift-infra/auto-csr-approver-29458120-hd4pv" Jan 04 00:40:00 crc kubenswrapper[5108]: I0104 00:40:00.866111 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qmrvq\" (UniqueName: \"kubernetes.io/projected/28d457cb-d548-49f2-8d4b-b424a74c750b-kube-api-access-qmrvq\") pod \"auto-csr-approver-29458120-hd4pv\" (UID: \"28d457cb-d548-49f2-8d4b-b424a74c750b\") " pod="openshift-infra/auto-csr-approver-29458120-hd4pv" Jan 04 00:40:00 crc kubenswrapper[5108]: I0104 00:40:00.896284 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmrvq\" (UniqueName: \"kubernetes.io/projected/28d457cb-d548-49f2-8d4b-b424a74c750b-kube-api-access-qmrvq\") pod \"auto-csr-approver-29458120-hd4pv\" (UID: \"28d457cb-d548-49f2-8d4b-b424a74c750b\") " pod="openshift-infra/auto-csr-approver-29458120-hd4pv" Jan 04 00:40:01 crc kubenswrapper[5108]: I0104 00:40:01.007950 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458120-hd4pv" Jan 04 00:40:01 crc kubenswrapper[5108]: I0104 00:40:01.489228 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458120-hd4pv"] Jan 04 00:40:01 crc kubenswrapper[5108]: I0104 00:40:01.920912 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458120-hd4pv" event={"ID":"28d457cb-d548-49f2-8d4b-b424a74c750b","Type":"ContainerStarted","Data":"42f450bc1a8dd6b345729c702c8dd7853bc091e19d6c8f7987826373f319e712"} Jan 04 00:40:01 crc kubenswrapper[5108]: I0104 00:40:01.924647 5108 generic.go:358] "Generic (PLEG): container finished" podID="e7829b0a-fd5d-468e-9df6-b265e62d4278" containerID="a96bddcd52eeabe4c3deab9ae05e4792ece1c6382f88bce9152aec983e13ab90" exitCode=0 Jan 04 00:40:01 crc kubenswrapper[5108]: I0104 00:40:01.924827 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-b95s4" 
event={"ID":"e7829b0a-fd5d-468e-9df6-b265e62d4278","Type":"ContainerDied","Data":"a96bddcd52eeabe4c3deab9ae05e4792ece1c6382f88bce9152aec983e13ab90"} Jan 04 00:40:01 crc kubenswrapper[5108]: I0104 00:40:01.926408 5108 scope.go:117] "RemoveContainer" containerID="a96bddcd52eeabe4c3deab9ae05e4792ece1c6382f88bce9152aec983e13ab90" Jan 04 00:40:03 crc kubenswrapper[5108]: I0104 00:40:03.948388 5108 generic.go:358] "Generic (PLEG): container finished" podID="28d457cb-d548-49f2-8d4b-b424a74c750b" containerID="84e20ebf8354c3b516bac8d26034d381a32e2d3925e7438d63792f94afcfb7b6" exitCode=0 Jan 04 00:40:03 crc kubenswrapper[5108]: I0104 00:40:03.949318 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458120-hd4pv" event={"ID":"28d457cb-d548-49f2-8d4b-b424a74c750b","Type":"ContainerDied","Data":"84e20ebf8354c3b516bac8d26034d381a32e2d3925e7438d63792f94afcfb7b6"} Jan 04 00:40:05 crc kubenswrapper[5108]: I0104 00:40:05.211793 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458120-hd4pv" Jan 04 00:40:05 crc kubenswrapper[5108]: I0104 00:40:05.248601 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmrvq\" (UniqueName: \"kubernetes.io/projected/28d457cb-d548-49f2-8d4b-b424a74c750b-kube-api-access-qmrvq\") pod \"28d457cb-d548-49f2-8d4b-b424a74c750b\" (UID: \"28d457cb-d548-49f2-8d4b-b424a74c750b\") " Jan 04 00:40:05 crc kubenswrapper[5108]: I0104 00:40:05.261102 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d457cb-d548-49f2-8d4b-b424a74c750b-kube-api-access-qmrvq" (OuterVolumeSpecName: "kube-api-access-qmrvq") pod "28d457cb-d548-49f2-8d4b-b424a74c750b" (UID: "28d457cb-d548-49f2-8d4b-b424a74c750b"). InnerVolumeSpecName "kube-api-access-qmrvq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:40:05 crc kubenswrapper[5108]: I0104 00:40:05.350526 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qmrvq\" (UniqueName: \"kubernetes.io/projected/28d457cb-d548-49f2-8d4b-b424a74c750b-kube-api-access-qmrvq\") on node \"crc\" DevicePath \"\"" Jan 04 00:40:05 crc kubenswrapper[5108]: I0104 00:40:05.968237 5108 generic.go:358] "Generic (PLEG): container finished" podID="e7829b0a-fd5d-468e-9df6-b265e62d4278" containerID="5f5ae6b76811d90dd7df8b8dc255fbdf31e8bde78deb1ec6d32e30e13f1c8edd" exitCode=0 Jan 04 00:40:05 crc kubenswrapper[5108]: I0104 00:40:05.968375 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-b95s4" event={"ID":"e7829b0a-fd5d-468e-9df6-b265e62d4278","Type":"ContainerDied","Data":"5f5ae6b76811d90dd7df8b8dc255fbdf31e8bde78deb1ec6d32e30e13f1c8edd"} Jan 04 00:40:05 crc kubenswrapper[5108]: I0104 00:40:05.972079 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458120-hd4pv" event={"ID":"28d457cb-d548-49f2-8d4b-b424a74c750b","Type":"ContainerDied","Data":"42f450bc1a8dd6b345729c702c8dd7853bc091e19d6c8f7987826373f319e712"} Jan 04 00:40:05 crc kubenswrapper[5108]: I0104 00:40:05.972126 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42f450bc1a8dd6b345729c702c8dd7853bc091e19d6c8f7987826373f319e712" Jan 04 00:40:05 crc kubenswrapper[5108]: I0104 00:40:05.972129 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29458120-hd4pv"
Jan 04 00:40:06 crc kubenswrapper[5108]: I0104 00:40:06.309940 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29458114-bbwkb"]
Jan 04 00:40:06 crc kubenswrapper[5108]: I0104 00:40:06.315411 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29458114-bbwkb"]
Jan 04 00:40:06 crc kubenswrapper[5108]: I0104 00:40:06.458688 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2866724-88f6-46f3-87c3-d8b7af442d87" path="/var/lib/kubelet/pods/d2866724-88f6-46f3-87c3-d8b7af442d87/volumes"
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.289897 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-b95s4"
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.389953 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-healthcheck-log\") pod \"e7829b0a-fd5d-468e-9df6-b265e62d4278\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") "
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.390020 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-ceilometer-entrypoint-script\") pod \"e7829b0a-fd5d-468e-9df6-b265e62d4278\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") "
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.390073 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-collectd-config\") pod \"e7829b0a-fd5d-468e-9df6-b265e62d4278\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") "
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.390187 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-collectd-entrypoint-script\") pod \"e7829b0a-fd5d-468e-9df6-b265e62d4278\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") "
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.390262 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6thtp\" (UniqueName: \"kubernetes.io/projected/e7829b0a-fd5d-468e-9df6-b265e62d4278-kube-api-access-6thtp\") pod \"e7829b0a-fd5d-468e-9df6-b265e62d4278\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") "
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.390326 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-ceilometer-publisher\") pod \"e7829b0a-fd5d-468e-9df6-b265e62d4278\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") "
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.390377 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-sensubility-config\") pod \"e7829b0a-fd5d-468e-9df6-b265e62d4278\" (UID: \"e7829b0a-fd5d-468e-9df6-b265e62d4278\") "
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.398559 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7829b0a-fd5d-468e-9df6-b265e62d4278-kube-api-access-6thtp" (OuterVolumeSpecName: "kube-api-access-6thtp") pod "e7829b0a-fd5d-468e-9df6-b265e62d4278" (UID: "e7829b0a-fd5d-468e-9df6-b265e62d4278"). InnerVolumeSpecName "kube-api-access-6thtp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.412095 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "e7829b0a-fd5d-468e-9df6-b265e62d4278" (UID: "e7829b0a-fd5d-468e-9df6-b265e62d4278"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.412121 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "e7829b0a-fd5d-468e-9df6-b265e62d4278" (UID: "e7829b0a-fd5d-468e-9df6-b265e62d4278"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.413830 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "e7829b0a-fd5d-468e-9df6-b265e62d4278" (UID: "e7829b0a-fd5d-468e-9df6-b265e62d4278"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.414963 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "e7829b0a-fd5d-468e-9df6-b265e62d4278" (UID: "e7829b0a-fd5d-468e-9df6-b265e62d4278"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.425585 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "e7829b0a-fd5d-468e-9df6-b265e62d4278" (UID: "e7829b0a-fd5d-468e-9df6-b265e62d4278"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.446282 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "e7829b0a-fd5d-468e-9df6-b265e62d4278" (UID: "e7829b0a-fd5d-468e-9df6-b265e62d4278"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.449156 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:40:07 crc kubenswrapper[5108]: E0104 00:40:07.449765 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.493690 5108 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-ceilometer-publisher\") on node \"crc\" DevicePath \"\""
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.494506 5108 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-sensubility-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.494521 5108 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-healthcheck-log\") on node \"crc\" DevicePath \"\""
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.494532 5108 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\""
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.494543 5108 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-collectd-config\") on node \"crc\" DevicePath \"\""
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.494553 5108 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e7829b0a-fd5d-468e-9df6-b265e62d4278-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\""
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.494566 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6thtp\" (UniqueName: \"kubernetes.io/projected/e7829b0a-fd5d-468e-9df6-b265e62d4278-kube-api-access-6thtp\") on node \"crc\" DevicePath \"\""
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.992662 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-b95s4" event={"ID":"e7829b0a-fd5d-468e-9df6-b265e62d4278","Type":"ContainerDied","Data":"6c6099b7115655e30259d9e9050c465e331f470a0782d99c869a2a93b38bedcc"}
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.993122 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c6099b7115655e30259d9e9050c465e331f470a0782d99c869a2a93b38bedcc"
Jan 04 00:40:07 crc kubenswrapper[5108]: I0104 00:40:07.992712 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-b95s4"
Jan 04 00:40:09 crc kubenswrapper[5108]: I0104 00:40:09.353138 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-b95s4_e7829b0a-fd5d-468e-9df6-b265e62d4278/smoketest-collectd/0.log"
Jan 04 00:40:09 crc kubenswrapper[5108]: I0104 00:40:09.692635 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-b95s4_e7829b0a-fd5d-468e-9df6-b265e62d4278/smoketest-ceilometer/0.log"
Jan 04 00:40:10 crc kubenswrapper[5108]: I0104 00:40:10.033757 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-6lxlx_f8ca2ca4-0523-4d2d-bec7-2c12ed1be637/default-interconnect/0.log"
Jan 04 00:40:10 crc kubenswrapper[5108]: I0104 00:40:10.303088 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl_9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04/bridge/2.log"
Jan 04 00:40:10 crc kubenswrapper[5108]: I0104 00:40:10.586972 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-t55tl_9ff01f9b-6047-4dd0-87e4-23cf8ca4fb04/sg-core/0.log"
Jan 04 00:40:10 crc kubenswrapper[5108]: I0104 00:40:10.847261 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r_b08445a1-a583-42f6-b86f-4eb1f0e941d1/bridge/2.log"
Jan 04 00:40:11 crc kubenswrapper[5108]: I0104 00:40:11.185717 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-56b99fcf8c-czs2r_b08445a1-a583-42f6-b86f-4eb1f0e941d1/sg-core/0.log"
Jan 04 00:40:11 crc kubenswrapper[5108]: I0104 00:40:11.518125 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2_2d896738-b0d6-4d0a-81b6-3e24ac1ce92d/bridge/2.log"
Jan 04 00:40:11 crc kubenswrapper[5108]: I0104 00:40:11.806685 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-jdcj2_2d896738-b0d6-4d0a-81b6-3e24ac1ce92d/sg-core/0.log"
Jan 04 00:40:12 crc kubenswrapper[5108]: I0104 00:40:12.126954 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v_67fc1329-a5f0-454d-8fc9-d9e5d6410e13/bridge/2.log"
Jan 04 00:40:12 crc kubenswrapper[5108]: I0104 00:40:12.407642 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-564c549f5c-9xf9v_67fc1329-a5f0-454d-8fc9-d9e5d6410e13/sg-core/0.log"
Jan 04 00:40:12 crc kubenswrapper[5108]: I0104 00:40:12.721587 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz_fe634a26-6a59-4ba4-b860-9fb7908015ed/bridge/2.log"
Jan 04 00:40:13 crc kubenswrapper[5108]: I0104 00:40:13.019230 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-kx9cz_fe634a26-6a59-4ba4-b860-9fb7908015ed/sg-core/0.log"
Jan 04 00:40:16 crc kubenswrapper[5108]: I0104 00:40:16.703816 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-6668876698-qlfqx_6eca10e1-2858-49cb-97a4-a53149ea7ceb/operator/0.log"
Jan 04 00:40:17 crc kubenswrapper[5108]: I0104 00:40:17.000845 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_bbb51482-bfac-4350-9ec7-b9470cbf4b19/prometheus/0.log"
Jan 04 00:40:17 crc kubenswrapper[5108]: I0104 00:40:17.303809 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_8a56d552-f484-43ef-9f02-ea72cc80b853/elasticsearch/0.log"
Jan 04 00:40:17 crc kubenswrapper[5108]: I0104 00:40:17.574635 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-hjv6t_f61d3277-40d7-4ac1-994c-e64ce83b3fe9/prometheus-webhook-snmp/0.log"
Jan 04 00:40:17 crc kubenswrapper[5108]: I0104 00:40:17.848853 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_52617309-d688-4e3c-8a64-1894511950bc/alertmanager/0.log"
Jan 04 00:40:21 crc kubenswrapper[5108]: I0104 00:40:21.448788 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:40:21 crc kubenswrapper[5108]: E0104 00:40:21.449948 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:40:27 crc kubenswrapper[5108]: I0104 00:40:27.186362 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzs5n_8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23/kube-multus/0.log"
Jan 04 00:40:27 crc kubenswrapper[5108]: I0104 00:40:27.186581 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzs5n_8f4ef11a-e50f-4ed2-88f5-8cb0eef1af23/kube-multus/0.log"
Jan 04 00:40:27 crc kubenswrapper[5108]: I0104 00:40:27.203161 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 04 00:40:27 crc kubenswrapper[5108]: I0104 00:40:27.204030 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 04 00:40:33 crc kubenswrapper[5108]: I0104 00:40:33.052328 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-845d76977f-skznp_90824245-ac48-46b3-890a-0aff0a7a62a1/operator/0.log"
Jan 04 00:40:33 crc kubenswrapper[5108]: I0104 00:40:33.251791 5108 scope.go:117] "RemoveContainer" containerID="0fc67ad9731bd2df6910905b5525ba29136a7cd6f9212baeaa39dd25a5a328b9"
Jan 04 00:40:33 crc kubenswrapper[5108]: I0104 00:40:33.450988 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:40:33 crc kubenswrapper[5108]: E0104 00:40:33.451963 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:40:36 crc kubenswrapper[5108]: I0104 00:40:36.697823 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-6668876698-qlfqx_6eca10e1-2858-49cb-97a4-a53149ea7ceb/operator/0.log"
Jan 04 00:40:37 crc kubenswrapper[5108]: I0104 00:40:37.025032 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_5a987618-a0d0-4689-99c8-2c52b1f183b9/qdr/0.log"
Jan 04 00:40:46 crc kubenswrapper[5108]: I0104 00:40:46.470966 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:40:46 crc kubenswrapper[5108]: E0104 00:40:46.472680 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:40:59 crc kubenswrapper[5108]: I0104 00:40:59.449751 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:40:59 crc kubenswrapper[5108]: E0104 00:40:59.451184 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.084169 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mmbkm/must-gather-6ndt7"]
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.085484 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28d457cb-d548-49f2-8d4b-b424a74c750b" containerName="oc"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.085501 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d457cb-d548-49f2-8d4b-b424a74c750b" containerName="oc"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.085517 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7829b0a-fd5d-468e-9df6-b265e62d4278" containerName="smoketest-collectd"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.085523 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7829b0a-fd5d-468e-9df6-b265e62d4278" containerName="smoketest-collectd"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.085564 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7829b0a-fd5d-468e-9df6-b265e62d4278" containerName="smoketest-ceilometer"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.085570 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7829b0a-fd5d-468e-9df6-b265e62d4278" containerName="smoketest-ceilometer"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.085705 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="e7829b0a-fd5d-468e-9df6-b265e62d4278" containerName="smoketest-collectd"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.085719 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="28d457cb-d548-49f2-8d4b-b424a74c750b" containerName="oc"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.085726 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="e7829b0a-fd5d-468e-9df6-b265e62d4278" containerName="smoketest-ceilometer"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.163317 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mmbkm/must-gather-6ndt7"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.170180 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mmbkm/must-gather-6ndt7"]
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.172797 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-mmbkm\"/\"openshift-service-ca.crt\""
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.173644 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-mmbkm\"/\"kube-root-ca.crt\""
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.173906 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-mmbkm\"/\"default-dockercfg-x7s5t\""
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.237472 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d45rb\" (UniqueName: \"kubernetes.io/projected/8278d449-817f-4674-96e9-5b8d48b2cb11-kube-api-access-d45rb\") pod \"must-gather-6ndt7\" (UID: \"8278d449-817f-4674-96e9-5b8d48b2cb11\") " pod="openshift-must-gather-mmbkm/must-gather-6ndt7"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.237913 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8278d449-817f-4674-96e9-5b8d48b2cb11-must-gather-output\") pod \"must-gather-6ndt7\" (UID: \"8278d449-817f-4674-96e9-5b8d48b2cb11\") " pod="openshift-must-gather-mmbkm/must-gather-6ndt7"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.339733 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8278d449-817f-4674-96e9-5b8d48b2cb11-must-gather-output\") pod \"must-gather-6ndt7\" (UID: \"8278d449-817f-4674-96e9-5b8d48b2cb11\") " pod="openshift-must-gather-mmbkm/must-gather-6ndt7"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.339807 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d45rb\" (UniqueName: \"kubernetes.io/projected/8278d449-817f-4674-96e9-5b8d48b2cb11-kube-api-access-d45rb\") pod \"must-gather-6ndt7\" (UID: \"8278d449-817f-4674-96e9-5b8d48b2cb11\") " pod="openshift-must-gather-mmbkm/must-gather-6ndt7"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.341119 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8278d449-817f-4674-96e9-5b8d48b2cb11-must-gather-output\") pod \"must-gather-6ndt7\" (UID: \"8278d449-817f-4674-96e9-5b8d48b2cb11\") " pod="openshift-must-gather-mmbkm/must-gather-6ndt7"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.364290 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d45rb\" (UniqueName: \"kubernetes.io/projected/8278d449-817f-4674-96e9-5b8d48b2cb11-kube-api-access-d45rb\") pod \"must-gather-6ndt7\" (UID: \"8278d449-817f-4674-96e9-5b8d48b2cb11\") " pod="openshift-must-gather-mmbkm/must-gather-6ndt7"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.518375 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mmbkm/must-gather-6ndt7"
Jan 04 00:41:03 crc kubenswrapper[5108]: I0104 00:41:03.744270 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mmbkm/must-gather-6ndt7"]
Jan 04 00:41:04 crc kubenswrapper[5108]: I0104 00:41:04.519519 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mmbkm/must-gather-6ndt7" event={"ID":"8278d449-817f-4674-96e9-5b8d48b2cb11","Type":"ContainerStarted","Data":"fef3fc2d385bfc8b87d44fe8106b3adc2a7269eb74cadd5c002cf352001a0f2e"}
Jan 04 00:41:11 crc kubenswrapper[5108]: I0104 00:41:11.589760 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mmbkm/must-gather-6ndt7" event={"ID":"8278d449-817f-4674-96e9-5b8d48b2cb11","Type":"ContainerStarted","Data":"b66a48504845ee3edd00a6ca4545bcd9947d65028e3ec15cec89011a3181601e"}
Jan 04 00:41:11 crc kubenswrapper[5108]: I0104 00:41:11.591700 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mmbkm/must-gather-6ndt7" event={"ID":"8278d449-817f-4674-96e9-5b8d48b2cb11","Type":"ContainerStarted","Data":"54d10580730557b98cdfc512065fa79bb282e09f448ad90546181fb0af50ad39"}
Jan 04 00:41:11 crc kubenswrapper[5108]: I0104 00:41:11.613926 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mmbkm/must-gather-6ndt7" podStartSLOduration=2.047290757 podStartE2EDuration="8.613902462s" podCreationTimestamp="2026-01-04 00:41:03 +0000 UTC" firstStartedPulling="2026-01-04 00:41:03.761125616 +0000 UTC m=+1837.749690702" lastFinishedPulling="2026-01-04 00:41:10.327737301 +0000 UTC m=+1844.316302407" observedRunningTime="2026-01-04 00:41:11.609140311 +0000 UTC m=+1845.597705417" watchObservedRunningTime="2026-01-04 00:41:11.613902462 +0000 UTC m=+1845.602467558"
Jan 04 00:41:12 crc kubenswrapper[5108]: I0104 00:41:12.449905 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:41:12 crc kubenswrapper[5108]: E0104 00:41:12.450443 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:41:24 crc kubenswrapper[5108]: I0104 00:41:24.448933 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:41:24 crc kubenswrapper[5108]: E0104 00:41:24.452401 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:41:35 crc kubenswrapper[5108]: I0104 00:41:35.448716 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:41:35 crc kubenswrapper[5108]: E0104 00:41:35.449606 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:41:47 crc kubenswrapper[5108]: I0104 00:41:47.448564 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:41:47 crc kubenswrapper[5108]: E0104 00:41:47.449855 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:41:55 crc kubenswrapper[5108]: I0104 00:41:55.427116 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-pn9xb_12382f58-cdec-4d79-abf7-f9281092d8f0/control-plane-machine-set-operator/0.log"
Jan 04 00:41:55 crc kubenswrapper[5108]: I0104 00:41:55.631758 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-jzcn5_948d9eda-ff2a-4ee3-913b-6a3f19481ee5/kube-rbac-proxy/0.log"
Jan 04 00:41:55 crc kubenswrapper[5108]: I0104 00:41:55.698704 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-jzcn5_948d9eda-ff2a-4ee3-913b-6a3f19481ee5/machine-api-operator/0.log"
Jan 04 00:41:59 crc kubenswrapper[5108]: I0104 00:41:59.448767 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:41:59 crc kubenswrapper[5108]: E0104 00:41:59.449457 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:42:00 crc kubenswrapper[5108]: I0104 00:42:00.142302 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29458122-r6gmv"]
Jan 04 00:42:00 crc kubenswrapper[5108]: I0104 00:42:00.157109 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458122-r6gmv"]
Jan 04 00:42:00 crc kubenswrapper[5108]: I0104 00:42:00.157398 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458122-r6gmv"
Jan 04 00:42:00 crc kubenswrapper[5108]: I0104 00:42:00.159996 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 04 00:42:00 crc kubenswrapper[5108]: I0104 00:42:00.162274 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 04 00:42:00 crc kubenswrapper[5108]: I0104 00:42:00.163305 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-s7k94\""
Jan 04 00:42:00 crc kubenswrapper[5108]: I0104 00:42:00.237407 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vclqj\" (UniqueName: \"kubernetes.io/projected/3db32869-ec8e-474c-8bca-7a95f3fa9fe8-kube-api-access-vclqj\") pod \"auto-csr-approver-29458122-r6gmv\" (UID: \"3db32869-ec8e-474c-8bca-7a95f3fa9fe8\") " pod="openshift-infra/auto-csr-approver-29458122-r6gmv"
Jan 04 00:42:00 crc kubenswrapper[5108]: I0104 00:42:00.339256 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vclqj\" (UniqueName: \"kubernetes.io/projected/3db32869-ec8e-474c-8bca-7a95f3fa9fe8-kube-api-access-vclqj\") pod \"auto-csr-approver-29458122-r6gmv\" (UID: \"3db32869-ec8e-474c-8bca-7a95f3fa9fe8\") " pod="openshift-infra/auto-csr-approver-29458122-r6gmv"
Jan 04 00:42:00 crc kubenswrapper[5108]: I0104 00:42:00.363298 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vclqj\" (UniqueName: \"kubernetes.io/projected/3db32869-ec8e-474c-8bca-7a95f3fa9fe8-kube-api-access-vclqj\") pod \"auto-csr-approver-29458122-r6gmv\" (UID: \"3db32869-ec8e-474c-8bca-7a95f3fa9fe8\") " pod="openshift-infra/auto-csr-approver-29458122-r6gmv"
Jan 04 00:42:00 crc kubenswrapper[5108]: I0104 00:42:00.478184 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458122-r6gmv"
Jan 04 00:42:00 crc kubenswrapper[5108]: I0104 00:42:00.764509 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458122-r6gmv"]
Jan 04 00:42:01 crc kubenswrapper[5108]: I0104 00:42:01.048566 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458122-r6gmv" event={"ID":"3db32869-ec8e-474c-8bca-7a95f3fa9fe8","Type":"ContainerStarted","Data":"0c7a1d1bba80a0d29761403fd683c0e8a6eb7bd3dfac7dc0fc14d8f6172c55f8"}
Jan 04 00:42:02 crc kubenswrapper[5108]: I0104 00:42:02.060920 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458122-r6gmv" event={"ID":"3db32869-ec8e-474c-8bca-7a95f3fa9fe8","Type":"ContainerStarted","Data":"aa2953dd335086a5fc44fa384082aeb947ad9df97435f766cb042e3015f21ea6"}
Jan 04 00:42:02 crc kubenswrapper[5108]: I0104 00:42:02.081339 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29458122-r6gmv" podStartSLOduration=1.352754348 podStartE2EDuration="2.081312049s" podCreationTimestamp="2026-01-04 00:42:00 +0000 UTC" firstStartedPulling="2026-01-04 00:42:00.766951174 +0000 UTC m=+1894.755516260" lastFinishedPulling="2026-01-04 00:42:01.495508875 +0000 UTC m=+1895.484073961" observedRunningTime="2026-01-04 00:42:02.080223909 +0000 UTC m=+1896.068788995" watchObservedRunningTime="2026-01-04 00:42:02.081312049 +0000 UTC m=+1896.069877135"
Jan 04 00:42:03 crc kubenswrapper[5108]: I0104 00:42:03.071774 5108 generic.go:358] "Generic (PLEG): container finished" podID="3db32869-ec8e-474c-8bca-7a95f3fa9fe8" containerID="aa2953dd335086a5fc44fa384082aeb947ad9df97435f766cb042e3015f21ea6" exitCode=0
Jan 04 00:42:03 crc kubenswrapper[5108]: I0104 00:42:03.071864 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458122-r6gmv" event={"ID":"3db32869-ec8e-474c-8bca-7a95f3fa9fe8","Type":"ContainerDied","Data":"aa2953dd335086a5fc44fa384082aeb947ad9df97435f766cb042e3015f21ea6"}
Jan 04 00:42:04 crc kubenswrapper[5108]: I0104 00:42:04.350970 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458122-r6gmv"
Jan 04 00:42:04 crc kubenswrapper[5108]: I0104 00:42:04.419377 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vclqj\" (UniqueName: \"kubernetes.io/projected/3db32869-ec8e-474c-8bca-7a95f3fa9fe8-kube-api-access-vclqj\") pod \"3db32869-ec8e-474c-8bca-7a95f3fa9fe8\" (UID: \"3db32869-ec8e-474c-8bca-7a95f3fa9fe8\") "
Jan 04 00:42:04 crc kubenswrapper[5108]: I0104 00:42:04.431421 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db32869-ec8e-474c-8bca-7a95f3fa9fe8-kube-api-access-vclqj" (OuterVolumeSpecName: "kube-api-access-vclqj") pod "3db32869-ec8e-474c-8bca-7a95f3fa9fe8" (UID: "3db32869-ec8e-474c-8bca-7a95f3fa9fe8"). InnerVolumeSpecName "kube-api-access-vclqj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:42:04 crc kubenswrapper[5108]: I0104 00:42:04.521610 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vclqj\" (UniqueName: \"kubernetes.io/projected/3db32869-ec8e-474c-8bca-7a95f3fa9fe8-kube-api-access-vclqj\") on node \"crc\" DevicePath \"\""
Jan 04 00:42:05 crc kubenswrapper[5108]: I0104 00:42:05.090294 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458122-r6gmv" event={"ID":"3db32869-ec8e-474c-8bca-7a95f3fa9fe8","Type":"ContainerDied","Data":"0c7a1d1bba80a0d29761403fd683c0e8a6eb7bd3dfac7dc0fc14d8f6172c55f8"}
Jan 04 00:42:05 crc kubenswrapper[5108]: I0104 00:42:05.090339 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458122-r6gmv"
Jan 04 00:42:05 crc kubenswrapper[5108]: I0104 00:42:05.090367 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c7a1d1bba80a0d29761403fd683c0e8a6eb7bd3dfac7dc0fc14d8f6172c55f8"
Jan 04 00:42:05 crc kubenswrapper[5108]: I0104 00:42:05.140671 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29458116-75t67"]
Jan 04 00:42:05 crc kubenswrapper[5108]: I0104 00:42:05.146713 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29458116-75t67"]
Jan 04 00:42:06 crc kubenswrapper[5108]: I0104 00:42:06.458401 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a303b7-6544-4341-8518-88b23ca64ce5" path="/var/lib/kubelet/pods/a1a303b7-6544-4341-8518-88b23ca64ce5/volumes"
Jan 04 00:42:08 crc kubenswrapper[5108]: I0104 00:42:08.551894 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-4kp96_f6660297-af47-40ae-b909-73f073b53693/cert-manager-controller/0.log"
Jan 04 00:42:08 crc kubenswrapper[5108]: I0104 00:42:08.714781 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-cnrpf_331877d2-3f29-4eac-897c-010b1d98fda4/cert-manager-cainjector/0.log"
Jan 04 00:42:08 crc kubenswrapper[5108]: I0104 00:42:08.782669 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-72zhx_64fc2ae4-d44c-4843-9750-971e567d50c3/cert-manager-webhook/0.log"
Jan 04 00:42:10 crc kubenswrapper[5108]: I0104 00:42:10.449525 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:42:10 crc kubenswrapper[5108]: E0104 00:42:10.450585 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.006165 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9vdrm"]
Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.007072 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3db32869-ec8e-474c-8bca-7a95f3fa9fe8" containerName="oc"
Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.007087 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db32869-ec8e-474c-8bca-7a95f3fa9fe8" containerName="oc"
Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.007317 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3db32869-ec8e-474c-8bca-7a95f3fa9fe8" containerName="oc"
Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.039692 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api"
pods=["openshift-marketplace/community-operators-9vdrm"] Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.039987 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.159251 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlng6\" (UniqueName: \"kubernetes.io/projected/3fbf1b7f-ab76-4e97-aee2-68554376d136-kube-api-access-jlng6\") pod \"community-operators-9vdrm\" (UID: \"3fbf1b7f-ab76-4e97-aee2-68554376d136\") " pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.159376 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fbf1b7f-ab76-4e97-aee2-68554376d136-catalog-content\") pod \"community-operators-9vdrm\" (UID: \"3fbf1b7f-ab76-4e97-aee2-68554376d136\") " pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.159514 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fbf1b7f-ab76-4e97-aee2-68554376d136-utilities\") pod \"community-operators-9vdrm\" (UID: \"3fbf1b7f-ab76-4e97-aee2-68554376d136\") " pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.261516 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jlng6\" (UniqueName: \"kubernetes.io/projected/3fbf1b7f-ab76-4e97-aee2-68554376d136-kube-api-access-jlng6\") pod \"community-operators-9vdrm\" (UID: \"3fbf1b7f-ab76-4e97-aee2-68554376d136\") " pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.261571 5108 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fbf1b7f-ab76-4e97-aee2-68554376d136-catalog-content\") pod \"community-operators-9vdrm\" (UID: \"3fbf1b7f-ab76-4e97-aee2-68554376d136\") " pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.261653 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fbf1b7f-ab76-4e97-aee2-68554376d136-utilities\") pod \"community-operators-9vdrm\" (UID: \"3fbf1b7f-ab76-4e97-aee2-68554376d136\") " pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.262196 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fbf1b7f-ab76-4e97-aee2-68554376d136-catalog-content\") pod \"community-operators-9vdrm\" (UID: \"3fbf1b7f-ab76-4e97-aee2-68554376d136\") " pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.262262 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fbf1b7f-ab76-4e97-aee2-68554376d136-utilities\") pod \"community-operators-9vdrm\" (UID: \"3fbf1b7f-ab76-4e97-aee2-68554376d136\") " pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.293754 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlng6\" (UniqueName: \"kubernetes.io/projected/3fbf1b7f-ab76-4e97-aee2-68554376d136-kube-api-access-jlng6\") pod \"community-operators-9vdrm\" (UID: \"3fbf1b7f-ab76-4e97-aee2-68554376d136\") " pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.378924 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:12 crc kubenswrapper[5108]: I0104 00:42:12.970973 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9vdrm"] Jan 04 00:42:13 crc kubenswrapper[5108]: I0104 00:42:13.169266 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vdrm" event={"ID":"3fbf1b7f-ab76-4e97-aee2-68554376d136","Type":"ContainerStarted","Data":"04e1583bda9778bdc7c826070b5c6912bb4b8dd5de470a5ee45880e7fcc06e91"} Jan 04 00:42:13 crc kubenswrapper[5108]: I0104 00:42:13.169351 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vdrm" event={"ID":"3fbf1b7f-ab76-4e97-aee2-68554376d136","Type":"ContainerStarted","Data":"6a12d35facd9f08d18dfe73b011bd2c6b187e7fb84db95a1b665e785672fb22d"} Jan 04 00:42:14 crc kubenswrapper[5108]: I0104 00:42:14.181096 5108 generic.go:358] "Generic (PLEG): container finished" podID="3fbf1b7f-ab76-4e97-aee2-68554376d136" containerID="04e1583bda9778bdc7c826070b5c6912bb4b8dd5de470a5ee45880e7fcc06e91" exitCode=0 Jan 04 00:42:14 crc kubenswrapper[5108]: I0104 00:42:14.181234 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vdrm" event={"ID":"3fbf1b7f-ab76-4e97-aee2-68554376d136","Type":"ContainerDied","Data":"04e1583bda9778bdc7c826070b5c6912bb4b8dd5de470a5ee45880e7fcc06e91"} Jan 04 00:42:15 crc kubenswrapper[5108]: I0104 00:42:15.193793 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vdrm" event={"ID":"3fbf1b7f-ab76-4e97-aee2-68554376d136","Type":"ContainerStarted","Data":"357bc5d687532805b1be2fecba7c98722e155beff5f08be72b3c4fe0ca4e9651"} Jan 04 00:42:16 crc kubenswrapper[5108]: I0104 00:42:16.205272 5108 generic.go:358] "Generic (PLEG): container finished" podID="3fbf1b7f-ab76-4e97-aee2-68554376d136" 
containerID="357bc5d687532805b1be2fecba7c98722e155beff5f08be72b3c4fe0ca4e9651" exitCode=0 Jan 04 00:42:16 crc kubenswrapper[5108]: I0104 00:42:16.205381 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vdrm" event={"ID":"3fbf1b7f-ab76-4e97-aee2-68554376d136","Type":"ContainerDied","Data":"357bc5d687532805b1be2fecba7c98722e155beff5f08be72b3c4fe0ca4e9651"} Jan 04 00:42:17 crc kubenswrapper[5108]: I0104 00:42:17.218031 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vdrm" event={"ID":"3fbf1b7f-ab76-4e97-aee2-68554376d136","Type":"ContainerStarted","Data":"bf6c206896baf427287ae4fc51f20acc16d83ee7f9308747f467a0c525d0973b"} Jan 04 00:42:17 crc kubenswrapper[5108]: I0104 00:42:17.256318 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9vdrm" podStartSLOduration=5.675178943 podStartE2EDuration="6.256293778s" podCreationTimestamp="2026-01-04 00:42:11 +0000 UTC" firstStartedPulling="2026-01-04 00:42:14.182390686 +0000 UTC m=+1908.170955772" lastFinishedPulling="2026-01-04 00:42:14.763505521 +0000 UTC m=+1908.752070607" observedRunningTime="2026-01-04 00:42:17.253441279 +0000 UTC m=+1911.242006365" watchObservedRunningTime="2026-01-04 00:42:17.256293778 +0000 UTC m=+1911.244859104" Jan 04 00:42:22 crc kubenswrapper[5108]: I0104 00:42:22.379865 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:22 crc kubenswrapper[5108]: I0104 00:42:22.380288 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:22 crc kubenswrapper[5108]: I0104 00:42:22.426312 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:22 crc kubenswrapper[5108]: I0104 
00:42:22.450228 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57" Jan 04 00:42:22 crc kubenswrapper[5108]: E0104 00:42:22.450757 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" Jan 04 00:42:23 crc kubenswrapper[5108]: I0104 00:42:23.341158 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:23 crc kubenswrapper[5108]: I0104 00:42:23.399538 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9vdrm"] Jan 04 00:42:25 crc kubenswrapper[5108]: I0104 00:42:25.303126 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9vdrm" podUID="3fbf1b7f-ab76-4e97-aee2-68554376d136" containerName="registry-server" containerID="cri-o://bf6c206896baf427287ae4fc51f20acc16d83ee7f9308747f467a0c525d0973b" gracePeriod=2 Jan 04 00:42:25 crc kubenswrapper[5108]: I0104 00:42:25.710338 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:25 crc kubenswrapper[5108]: I0104 00:42:25.771172 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm_e2e9b244-16b4-4e6b-a6cf-e82f0d019f72/util/0.log" Jan 04 00:42:25 crc kubenswrapper[5108]: I0104 00:42:25.804925 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlng6\" (UniqueName: \"kubernetes.io/projected/3fbf1b7f-ab76-4e97-aee2-68554376d136-kube-api-access-jlng6\") pod \"3fbf1b7f-ab76-4e97-aee2-68554376d136\" (UID: \"3fbf1b7f-ab76-4e97-aee2-68554376d136\") " Jan 04 00:42:25 crc kubenswrapper[5108]: I0104 00:42:25.805256 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fbf1b7f-ab76-4e97-aee2-68554376d136-catalog-content\") pod \"3fbf1b7f-ab76-4e97-aee2-68554376d136\" (UID: \"3fbf1b7f-ab76-4e97-aee2-68554376d136\") " Jan 04 00:42:25 crc kubenswrapper[5108]: I0104 00:42:25.809483 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fbf1b7f-ab76-4e97-aee2-68554376d136-utilities\") pod \"3fbf1b7f-ab76-4e97-aee2-68554376d136\" (UID: \"3fbf1b7f-ab76-4e97-aee2-68554376d136\") " Jan 04 00:42:25 crc kubenswrapper[5108]: I0104 00:42:25.810819 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fbf1b7f-ab76-4e97-aee2-68554376d136-utilities" (OuterVolumeSpecName: "utilities") pod "3fbf1b7f-ab76-4e97-aee2-68554376d136" (UID: "3fbf1b7f-ab76-4e97-aee2-68554376d136"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:42:25 crc kubenswrapper[5108]: I0104 00:42:25.818539 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fbf1b7f-ab76-4e97-aee2-68554376d136-kube-api-access-jlng6" (OuterVolumeSpecName: "kube-api-access-jlng6") pod "3fbf1b7f-ab76-4e97-aee2-68554376d136" (UID: "3fbf1b7f-ab76-4e97-aee2-68554376d136"). InnerVolumeSpecName "kube-api-access-jlng6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:42:25 crc kubenswrapper[5108]: I0104 00:42:25.864116 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fbf1b7f-ab76-4e97-aee2-68554376d136-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3fbf1b7f-ab76-4e97-aee2-68554376d136" (UID: "3fbf1b7f-ab76-4e97-aee2-68554376d136"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:42:25 crc kubenswrapper[5108]: I0104 00:42:25.911881 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fbf1b7f-ab76-4e97-aee2-68554376d136-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:42:25 crc kubenswrapper[5108]: I0104 00:42:25.911935 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fbf1b7f-ab76-4e97-aee2-68554376d136-utilities\") on node \"crc\" DevicePath \"\"" Jan 04 00:42:25 crc kubenswrapper[5108]: I0104 00:42:25.911954 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jlng6\" (UniqueName: \"kubernetes.io/projected/3fbf1b7f-ab76-4e97-aee2-68554376d136-kube-api-access-jlng6\") on node \"crc\" DevicePath \"\"" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.023786 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm_e2e9b244-16b4-4e6b-a6cf-e82f0d019f72/pull/0.log" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.031422 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm_e2e9b244-16b4-4e6b-a6cf-e82f0d019f72/util/0.log" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.082464 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm_e2e9b244-16b4-4e6b-a6cf-e82f0d019f72/pull/0.log" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.259771 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm_e2e9b244-16b4-4e6b-a6cf-e82f0d019f72/util/0.log" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.304676 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm_e2e9b244-16b4-4e6b-a6cf-e82f0d019f72/extract/0.log" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.308383 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anbxcm_e2e9b244-16b4-4e6b-a6cf-e82f0d019f72/pull/0.log" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.331057 5108 generic.go:358] "Generic (PLEG): container finished" podID="3fbf1b7f-ab76-4e97-aee2-68554376d136" containerID="bf6c206896baf427287ae4fc51f20acc16d83ee7f9308747f467a0c525d0973b" exitCode=0 Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.331181 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vdrm" 
event={"ID":"3fbf1b7f-ab76-4e97-aee2-68554376d136","Type":"ContainerDied","Data":"bf6c206896baf427287ae4fc51f20acc16d83ee7f9308747f467a0c525d0973b"} Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.331249 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9vdrm" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.331284 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vdrm" event={"ID":"3fbf1b7f-ab76-4e97-aee2-68554376d136","Type":"ContainerDied","Data":"6a12d35facd9f08d18dfe73b011bd2c6b187e7fb84db95a1b665e785672fb22d"} Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.331310 5108 scope.go:117] "RemoveContainer" containerID="bf6c206896baf427287ae4fc51f20acc16d83ee7f9308747f467a0c525d0973b" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.376243 5108 scope.go:117] "RemoveContainer" containerID="357bc5d687532805b1be2fecba7c98722e155beff5f08be72b3c4fe0ca4e9651" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.379320 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9vdrm"] Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.386462 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9vdrm"] Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.413166 5108 scope.go:117] "RemoveContainer" containerID="04e1583bda9778bdc7c826070b5c6912bb4b8dd5de470a5ee45880e7fcc06e91" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.437517 5108 scope.go:117] "RemoveContainer" containerID="bf6c206896baf427287ae4fc51f20acc16d83ee7f9308747f467a0c525d0973b" Jan 04 00:42:26 crc kubenswrapper[5108]: E0104 00:42:26.437955 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf6c206896baf427287ae4fc51f20acc16d83ee7f9308747f467a0c525d0973b\": container 
with ID starting with bf6c206896baf427287ae4fc51f20acc16d83ee7f9308747f467a0c525d0973b not found: ID does not exist" containerID="bf6c206896baf427287ae4fc51f20acc16d83ee7f9308747f467a0c525d0973b" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.437988 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf6c206896baf427287ae4fc51f20acc16d83ee7f9308747f467a0c525d0973b"} err="failed to get container status \"bf6c206896baf427287ae4fc51f20acc16d83ee7f9308747f467a0c525d0973b\": rpc error: code = NotFound desc = could not find container \"bf6c206896baf427287ae4fc51f20acc16d83ee7f9308747f467a0c525d0973b\": container with ID starting with bf6c206896baf427287ae4fc51f20acc16d83ee7f9308747f467a0c525d0973b not found: ID does not exist" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.438011 5108 scope.go:117] "RemoveContainer" containerID="357bc5d687532805b1be2fecba7c98722e155beff5f08be72b3c4fe0ca4e9651" Jan 04 00:42:26 crc kubenswrapper[5108]: E0104 00:42:26.438693 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"357bc5d687532805b1be2fecba7c98722e155beff5f08be72b3c4fe0ca4e9651\": container with ID starting with 357bc5d687532805b1be2fecba7c98722e155beff5f08be72b3c4fe0ca4e9651 not found: ID does not exist" containerID="357bc5d687532805b1be2fecba7c98722e155beff5f08be72b3c4fe0ca4e9651" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.438720 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"357bc5d687532805b1be2fecba7c98722e155beff5f08be72b3c4fe0ca4e9651"} err="failed to get container status \"357bc5d687532805b1be2fecba7c98722e155beff5f08be72b3c4fe0ca4e9651\": rpc error: code = NotFound desc = could not find container \"357bc5d687532805b1be2fecba7c98722e155beff5f08be72b3c4fe0ca4e9651\": container with ID starting with 357bc5d687532805b1be2fecba7c98722e155beff5f08be72b3c4fe0ca4e9651 not 
found: ID does not exist" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.438735 5108 scope.go:117] "RemoveContainer" containerID="04e1583bda9778bdc7c826070b5c6912bb4b8dd5de470a5ee45880e7fcc06e91" Jan 04 00:42:26 crc kubenswrapper[5108]: E0104 00:42:26.439818 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04e1583bda9778bdc7c826070b5c6912bb4b8dd5de470a5ee45880e7fcc06e91\": container with ID starting with 04e1583bda9778bdc7c826070b5c6912bb4b8dd5de470a5ee45880e7fcc06e91 not found: ID does not exist" containerID="04e1583bda9778bdc7c826070b5c6912bb4b8dd5de470a5ee45880e7fcc06e91" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.439886 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04e1583bda9778bdc7c826070b5c6912bb4b8dd5de470a5ee45880e7fcc06e91"} err="failed to get container status \"04e1583bda9778bdc7c826070b5c6912bb4b8dd5de470a5ee45880e7fcc06e91\": rpc error: code = NotFound desc = could not find container \"04e1583bda9778bdc7c826070b5c6912bb4b8dd5de470a5ee45880e7fcc06e91\": container with ID starting with 04e1583bda9778bdc7c826070b5c6912bb4b8dd5de470a5ee45880e7fcc06e91 not found: ID does not exist" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.487701 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fbf1b7f-ab76-4e97-aee2-68554376d136" path="/var/lib/kubelet/pods/3fbf1b7f-ab76-4e97-aee2-68554376d136/volumes" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.522495 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t_bfc83f87-93c5-4a13-9807-1f22d71c0214/util/0.log" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.704228 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t_bfc83f87-93c5-4a13-9807-1f22d71c0214/pull/0.log" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.704317 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t_bfc83f87-93c5-4a13-9807-1f22d71c0214/pull/0.log" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.713842 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t_bfc83f87-93c5-4a13-9807-1f22d71c0214/util/0.log" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.914835 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t_bfc83f87-93c5-4a13-9807-1f22d71c0214/pull/0.log" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.936066 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t_bfc83f87-93c5-4a13-9807-1f22d71c0214/util/0.log" Jan 04 00:42:26 crc kubenswrapper[5108]: I0104 00:42:26.983829 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f4jj4t_bfc83f87-93c5-4a13-9807-1f22d71c0214/extract/0.log" Jan 04 00:42:27 crc kubenswrapper[5108]: I0104 00:42:27.093731 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw_6ed7bb44-e54e-4477-a030-1b100090455f/util/0.log" Jan 04 00:42:27 crc kubenswrapper[5108]: I0104 00:42:27.316250 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw_6ed7bb44-e54e-4477-a030-1b100090455f/util/0.log" Jan 04 
00:42:27 crc kubenswrapper[5108]: I0104 00:42:27.372538 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw_6ed7bb44-e54e-4477-a030-1b100090455f/pull/0.log" Jan 04 00:42:27 crc kubenswrapper[5108]: I0104 00:42:27.373952 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw_6ed7bb44-e54e-4477-a030-1b100090455f/pull/0.log" Jan 04 00:42:27 crc kubenswrapper[5108]: I0104 00:42:27.526060 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw_6ed7bb44-e54e-4477-a030-1b100090455f/util/0.log" Jan 04 00:42:27 crc kubenswrapper[5108]: I0104 00:42:27.549511 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw_6ed7bb44-e54e-4477-a030-1b100090455f/pull/0.log" Jan 04 00:42:27 crc kubenswrapper[5108]: I0104 00:42:27.586528 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e2kmfw_6ed7bb44-e54e-4477-a030-1b100090455f/extract/0.log" Jan 04 00:42:27 crc kubenswrapper[5108]: I0104 00:42:27.729753 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k_fb4c7df0-1c9a-427b-821a-2efffa9a2a75/util/0.log" Jan 04 00:42:27 crc kubenswrapper[5108]: I0104 00:42:27.899700 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k_fb4c7df0-1c9a-427b-821a-2efffa9a2a75/pull/0.log" Jan 04 00:42:27 crc kubenswrapper[5108]: I0104 00:42:27.913937 5108 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k_fb4c7df0-1c9a-427b-821a-2efffa9a2a75/pull/0.log"
Jan 04 00:42:27 crc kubenswrapper[5108]: I0104 00:42:27.929469 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k_fb4c7df0-1c9a-427b-821a-2efffa9a2a75/util/0.log"
Jan 04 00:42:28 crc kubenswrapper[5108]: I0104 00:42:28.124573 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k_fb4c7df0-1c9a-427b-821a-2efffa9a2a75/util/0.log"
Jan 04 00:42:28 crc kubenswrapper[5108]: I0104 00:42:28.126391 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k_fb4c7df0-1c9a-427b-821a-2efffa9a2a75/extract/0.log"
Jan 04 00:42:28 crc kubenswrapper[5108]: I0104 00:42:28.126710 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08sm95k_fb4c7df0-1c9a-427b-821a-2efffa9a2a75/pull/0.log"
Jan 04 00:42:28 crc kubenswrapper[5108]: I0104 00:42:28.345135 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kkx7t_d7aa0e7e-b827-48db-b42e-bd862f760149/extract-utilities/0.log"
Jan 04 00:42:28 crc kubenswrapper[5108]: I0104 00:42:28.563803 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kkx7t_d7aa0e7e-b827-48db-b42e-bd862f760149/extract-content/0.log"
Jan 04 00:42:28 crc kubenswrapper[5108]: I0104 00:42:28.569164 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kkx7t_d7aa0e7e-b827-48db-b42e-bd862f760149/extract-utilities/0.log"
Jan 04 00:42:28 crc kubenswrapper[5108]: I0104 00:42:28.569904 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kkx7t_d7aa0e7e-b827-48db-b42e-bd862f760149/extract-content/0.log"
Jan 04 00:42:28 crc kubenswrapper[5108]: I0104 00:42:28.842031 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kkx7t_d7aa0e7e-b827-48db-b42e-bd862f760149/extract-utilities/0.log"
Jan 04 00:42:28 crc kubenswrapper[5108]: I0104 00:42:28.889416 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kkx7t_d7aa0e7e-b827-48db-b42e-bd862f760149/extract-content/0.log"
Jan 04 00:42:28 crc kubenswrapper[5108]: I0104 00:42:28.963088 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kkx7t_d7aa0e7e-b827-48db-b42e-bd862f760149/registry-server/0.log"
Jan 04 00:42:29 crc kubenswrapper[5108]: I0104 00:42:29.305582 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w78tn_3037a115-bdce-4e65-b199-0b4aef54946f/extract-utilities/0.log"
Jan 04 00:42:29 crc kubenswrapper[5108]: I0104 00:42:29.526585 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w78tn_3037a115-bdce-4e65-b199-0b4aef54946f/extract-content/0.log"
Jan 04 00:42:29 crc kubenswrapper[5108]: I0104 00:42:29.526807 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w78tn_3037a115-bdce-4e65-b199-0b4aef54946f/extract-utilities/0.log"
Jan 04 00:42:29 crc kubenswrapper[5108]: I0104 00:42:29.541970 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w78tn_3037a115-bdce-4e65-b199-0b4aef54946f/extract-content/0.log"
Jan 04 00:42:29 crc kubenswrapper[5108]: I0104 00:42:29.723404 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w78tn_3037a115-bdce-4e65-b199-0b4aef54946f/extract-content/0.log"
Jan 04 00:42:29 crc kubenswrapper[5108]: I0104 00:42:29.756452 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w78tn_3037a115-bdce-4e65-b199-0b4aef54946f/extract-utilities/0.log"
Jan 04 00:42:29 crc kubenswrapper[5108]: I0104 00:42:29.813350 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-qbs7x_a5a3358d-cb42-4f34-9746-87614c392fd0/marketplace-operator/0.log"
Jan 04 00:42:30 crc kubenswrapper[5108]: I0104 00:42:30.030401 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-42qln_62fe17f4-5665-44f0-b006-7082ad6b29e7/extract-utilities/0.log"
Jan 04 00:42:30 crc kubenswrapper[5108]: I0104 00:42:30.103591 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w78tn_3037a115-bdce-4e65-b199-0b4aef54946f/registry-server/0.log"
Jan 04 00:42:30 crc kubenswrapper[5108]: I0104 00:42:30.200046 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-42qln_62fe17f4-5665-44f0-b006-7082ad6b29e7/extract-utilities/0.log"
Jan 04 00:42:30 crc kubenswrapper[5108]: I0104 00:42:30.206956 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-42qln_62fe17f4-5665-44f0-b006-7082ad6b29e7/extract-content/0.log"
Jan 04 00:42:30 crc kubenswrapper[5108]: I0104 00:42:30.248401 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-42qln_62fe17f4-5665-44f0-b006-7082ad6b29e7/extract-content/0.log"
Jan 04 00:42:30 crc kubenswrapper[5108]: I0104 00:42:30.441683 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-42qln_62fe17f4-5665-44f0-b006-7082ad6b29e7/extract-utilities/0.log"
Jan 04 00:42:30 crc kubenswrapper[5108]: I0104 00:42:30.473957 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-42qln_62fe17f4-5665-44f0-b006-7082ad6b29e7/extract-content/0.log"
Jan 04 00:42:30 crc kubenswrapper[5108]: I0104 00:42:30.653673 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-42qln_62fe17f4-5665-44f0-b006-7082ad6b29e7/registry-server/0.log"
Jan 04 00:42:33 crc kubenswrapper[5108]: I0104 00:42:33.425272 5108 scope.go:117] "RemoveContainer" containerID="3aace4c3515de9f0692b8022799b9f32f64aa01111fa4a5dfa8f79f04de10a6d"
Jan 04 00:42:34 crc kubenswrapper[5108]: I0104 00:42:34.449366 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:42:34 crc kubenswrapper[5108]: E0104 00:42:34.449868 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:42:43 crc kubenswrapper[5108]: I0104 00:42:43.071096 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-sqk2p_ca471b6c-8fa7-4c07-ad6f-1b8191b591be/prometheus-operator/0.log"
Jan 04 00:42:43 crc kubenswrapper[5108]: I0104 00:42:43.186053 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7687c6569-678rm_7a6c9033-f6ec-4239-94fa-43ed16239b94/prometheus-operator-admission-webhook/0.log"
Jan 04 00:42:43 crc kubenswrapper[5108]: I0104 00:42:43.281662 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7687c6569-97jkv_c93782ed-1966-449f-b093-10a0e0380729/prometheus-operator-admission-webhook/0.log"
Jan 04 00:42:43 crc kubenswrapper[5108]: I0104 00:42:43.473350 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-5cwsw_5a2116a4-eb62-4e6e-99f5-22d8dfed008a/operator/0.log"
Jan 04 00:42:43 crc kubenswrapper[5108]: I0104 00:42:43.592530 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-r52pf_f9774351-84ab-432f-a137-73c8ccd87ead/perses-operator/0.log"
Jan 04 00:42:49 crc kubenswrapper[5108]: I0104 00:42:49.449637 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:42:49 crc kubenswrapper[5108]: E0104 00:42:49.451014 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:43:01 crc kubenswrapper[5108]: I0104 00:43:01.450347 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:43:01 crc kubenswrapper[5108]: E0104 00:43:01.451855 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:43:12 crc kubenswrapper[5108]: I0104 00:43:12.452500 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:43:12 crc kubenswrapper[5108]: E0104 00:43:12.454048 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:43:25 crc kubenswrapper[5108]: I0104 00:43:25.450359 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:43:25 crc kubenswrapper[5108]: E0104 00:43:25.452363 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:43:26 crc kubenswrapper[5108]: I0104 00:43:26.906300 5108 generic.go:358] "Generic (PLEG): container finished" podID="8278d449-817f-4674-96e9-5b8d48b2cb11" containerID="54d10580730557b98cdfc512065fa79bb282e09f448ad90546181fb0af50ad39" exitCode=0
Jan 04 00:43:26 crc kubenswrapper[5108]: I0104 00:43:26.906391 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mmbkm/must-gather-6ndt7" event={"ID":"8278d449-817f-4674-96e9-5b8d48b2cb11","Type":"ContainerDied","Data":"54d10580730557b98cdfc512065fa79bb282e09f448ad90546181fb0af50ad39"}
Jan 04 00:43:26 crc kubenswrapper[5108]: I0104 00:43:26.906907 5108 scope.go:117] "RemoveContainer" containerID="54d10580730557b98cdfc512065fa79bb282e09f448ad90546181fb0af50ad39"
Jan 04 00:43:27 crc kubenswrapper[5108]: I0104 00:43:27.038911 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mmbkm_must-gather-6ndt7_8278d449-817f-4674-96e9-5b8d48b2cb11/gather/0.log"
Jan 04 00:43:33 crc kubenswrapper[5108]: I0104 00:43:33.189369 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mmbkm/must-gather-6ndt7"]
Jan 04 00:43:33 crc kubenswrapper[5108]: I0104 00:43:33.190698 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-mmbkm/must-gather-6ndt7" podUID="8278d449-817f-4674-96e9-5b8d48b2cb11" containerName="copy" containerID="cri-o://b66a48504845ee3edd00a6ca4545bcd9947d65028e3ec15cec89011a3181601e" gracePeriod=2
Jan 04 00:43:33 crc kubenswrapper[5108]: I0104 00:43:33.194217 5108 status_manager.go:895] "Failed to get status for pod" podUID="8278d449-817f-4674-96e9-5b8d48b2cb11" pod="openshift-must-gather-mmbkm/must-gather-6ndt7" err="pods \"must-gather-6ndt7\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-mmbkm\": no relationship found between node 'crc' and this object"
Jan 04 00:43:33 crc kubenswrapper[5108]: I0104 00:43:33.198362 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mmbkm/must-gather-6ndt7"]
Jan 04 00:43:33 crc kubenswrapper[5108]: I0104 00:43:33.604239 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mmbkm_must-gather-6ndt7_8278d449-817f-4674-96e9-5b8d48b2cb11/copy/0.log"
Jan 04 00:43:33 crc kubenswrapper[5108]: I0104 00:43:33.607347 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mmbkm/must-gather-6ndt7"
Jan 04 00:43:33 crc kubenswrapper[5108]: I0104 00:43:33.609812 5108 status_manager.go:895] "Failed to get status for pod" podUID="8278d449-817f-4674-96e9-5b8d48b2cb11" pod="openshift-must-gather-mmbkm/must-gather-6ndt7" err="pods \"must-gather-6ndt7\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-mmbkm\": no relationship found between node 'crc' and this object"
Jan 04 00:43:33 crc kubenswrapper[5108]: I0104 00:43:33.623897 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d45rb\" (UniqueName: \"kubernetes.io/projected/8278d449-817f-4674-96e9-5b8d48b2cb11-kube-api-access-d45rb\") pod \"8278d449-817f-4674-96e9-5b8d48b2cb11\" (UID: \"8278d449-817f-4674-96e9-5b8d48b2cb11\") "
Jan 04 00:43:33 crc kubenswrapper[5108]: I0104 00:43:33.624571 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8278d449-817f-4674-96e9-5b8d48b2cb11-must-gather-output\") pod \"8278d449-817f-4674-96e9-5b8d48b2cb11\" (UID: \"8278d449-817f-4674-96e9-5b8d48b2cb11\") "
Jan 04 00:43:33 crc kubenswrapper[5108]: I0104 00:43:33.637999 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8278d449-817f-4674-96e9-5b8d48b2cb11-kube-api-access-d45rb" (OuterVolumeSpecName: "kube-api-access-d45rb") pod "8278d449-817f-4674-96e9-5b8d48b2cb11" (UID: "8278d449-817f-4674-96e9-5b8d48b2cb11"). InnerVolumeSpecName "kube-api-access-d45rb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:43:33 crc kubenswrapper[5108]: I0104 00:43:33.673389 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8278d449-817f-4674-96e9-5b8d48b2cb11-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8278d449-817f-4674-96e9-5b8d48b2cb11" (UID: "8278d449-817f-4674-96e9-5b8d48b2cb11"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:43:33 crc kubenswrapper[5108]: I0104 00:43:33.726710 5108 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8278d449-817f-4674-96e9-5b8d48b2cb11-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 04 00:43:33 crc kubenswrapper[5108]: I0104 00:43:33.726764 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d45rb\" (UniqueName: \"kubernetes.io/projected/8278d449-817f-4674-96e9-5b8d48b2cb11-kube-api-access-d45rb\") on node \"crc\" DevicePath \"\""
Jan 04 00:43:33 crc kubenswrapper[5108]: I0104 00:43:33.998954 5108 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mmbkm_must-gather-6ndt7_8278d449-817f-4674-96e9-5b8d48b2cb11/copy/0.log"
Jan 04 00:43:34 crc kubenswrapper[5108]: I0104 00:43:34.001161 5108 generic.go:358] "Generic (PLEG): container finished" podID="8278d449-817f-4674-96e9-5b8d48b2cb11" containerID="b66a48504845ee3edd00a6ca4545bcd9947d65028e3ec15cec89011a3181601e" exitCode=143
Jan 04 00:43:34 crc kubenswrapper[5108]: I0104 00:43:34.001306 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mmbkm/must-gather-6ndt7"
Jan 04 00:43:34 crc kubenswrapper[5108]: I0104 00:43:34.001378 5108 scope.go:117] "RemoveContainer" containerID="b66a48504845ee3edd00a6ca4545bcd9947d65028e3ec15cec89011a3181601e"
Jan 04 00:43:34 crc kubenswrapper[5108]: I0104 00:43:34.005498 5108 status_manager.go:895] "Failed to get status for pod" podUID="8278d449-817f-4674-96e9-5b8d48b2cb11" pod="openshift-must-gather-mmbkm/must-gather-6ndt7" err="pods \"must-gather-6ndt7\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-mmbkm\": no relationship found between node 'crc' and this object"
Jan 04 00:43:34 crc kubenswrapper[5108]: I0104 00:43:34.030871 5108 status_manager.go:895] "Failed to get status for pod" podUID="8278d449-817f-4674-96e9-5b8d48b2cb11" pod="openshift-must-gather-mmbkm/must-gather-6ndt7" err="pods \"must-gather-6ndt7\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-mmbkm\": no relationship found between node 'crc' and this object"
Jan 04 00:43:34 crc kubenswrapper[5108]: I0104 00:43:34.035865 5108 scope.go:117] "RemoveContainer" containerID="54d10580730557b98cdfc512065fa79bb282e09f448ad90546181fb0af50ad39"
Jan 04 00:43:34 crc kubenswrapper[5108]: I0104 00:43:34.122312 5108 scope.go:117] "RemoveContainer" containerID="b66a48504845ee3edd00a6ca4545bcd9947d65028e3ec15cec89011a3181601e"
Jan 04 00:43:34 crc kubenswrapper[5108]: E0104 00:43:34.123049 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b66a48504845ee3edd00a6ca4545bcd9947d65028e3ec15cec89011a3181601e\": container with ID starting with b66a48504845ee3edd00a6ca4545bcd9947d65028e3ec15cec89011a3181601e not found: ID does not exist" containerID="b66a48504845ee3edd00a6ca4545bcd9947d65028e3ec15cec89011a3181601e"
Jan 04 00:43:34 crc kubenswrapper[5108]: I0104 00:43:34.123112 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b66a48504845ee3edd00a6ca4545bcd9947d65028e3ec15cec89011a3181601e"} err="failed to get container status \"b66a48504845ee3edd00a6ca4545bcd9947d65028e3ec15cec89011a3181601e\": rpc error: code = NotFound desc = could not find container \"b66a48504845ee3edd00a6ca4545bcd9947d65028e3ec15cec89011a3181601e\": container with ID starting with b66a48504845ee3edd00a6ca4545bcd9947d65028e3ec15cec89011a3181601e not found: ID does not exist"
Jan 04 00:43:34 crc kubenswrapper[5108]: I0104 00:43:34.123148 5108 scope.go:117] "RemoveContainer" containerID="54d10580730557b98cdfc512065fa79bb282e09f448ad90546181fb0af50ad39"
Jan 04 00:43:34 crc kubenswrapper[5108]: E0104 00:43:34.123678 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54d10580730557b98cdfc512065fa79bb282e09f448ad90546181fb0af50ad39\": container with ID starting with 54d10580730557b98cdfc512065fa79bb282e09f448ad90546181fb0af50ad39 not found: ID does not exist" containerID="54d10580730557b98cdfc512065fa79bb282e09f448ad90546181fb0af50ad39"
Jan 04 00:43:34 crc kubenswrapper[5108]: I0104 00:43:34.123834 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54d10580730557b98cdfc512065fa79bb282e09f448ad90546181fb0af50ad39"} err="failed to get container status \"54d10580730557b98cdfc512065fa79bb282e09f448ad90546181fb0af50ad39\": rpc error: code = NotFound desc = could not find container \"54d10580730557b98cdfc512065fa79bb282e09f448ad90546181fb0af50ad39\": container with ID starting with 54d10580730557b98cdfc512065fa79bb282e09f448ad90546181fb0af50ad39 not found: ID does not exist"
Jan 04 00:43:34 crc kubenswrapper[5108]: I0104 00:43:34.462147 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8278d449-817f-4674-96e9-5b8d48b2cb11" path="/var/lib/kubelet/pods/8278d449-817f-4674-96e9-5b8d48b2cb11/volumes"
Jan 04 00:43:37 crc kubenswrapper[5108]: I0104 00:43:37.450001 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:43:37 crc kubenswrapper[5108]: E0104 00:43:37.453388 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:43:51 crc kubenswrapper[5108]: I0104 00:43:51.449612 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:43:51 crc kubenswrapper[5108]: E0104 00:43:51.452820 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.186010 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n8xk5"]
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.188375 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3fbf1b7f-ab76-4e97-aee2-68554376d136" containerName="registry-server"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.188412 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fbf1b7f-ab76-4e97-aee2-68554376d136" containerName="registry-server"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.188461 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8278d449-817f-4674-96e9-5b8d48b2cb11" containerName="copy"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.188474 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="8278d449-817f-4674-96e9-5b8d48b2cb11" containerName="copy"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.188514 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3fbf1b7f-ab76-4e97-aee2-68554376d136" containerName="extract-content"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.188528 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fbf1b7f-ab76-4e97-aee2-68554376d136" containerName="extract-content"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.188547 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3fbf1b7f-ab76-4e97-aee2-68554376d136" containerName="extract-utilities"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.188559 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fbf1b7f-ab76-4e97-aee2-68554376d136" containerName="extract-utilities"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.188583 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8278d449-817f-4674-96e9-5b8d48b2cb11" containerName="gather"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.188595 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="8278d449-817f-4674-96e9-5b8d48b2cb11" containerName="gather"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.188876 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="3fbf1b7f-ab76-4e97-aee2-68554376d136" containerName="registry-server"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.188913 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="8278d449-817f-4674-96e9-5b8d48b2cb11" containerName="gather"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.188947 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="8278d449-817f-4674-96e9-5b8d48b2cb11" containerName="copy"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.201529 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n8xk5"]
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.201946 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n8xk5"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.272098 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kglnr\" (UniqueName: \"kubernetes.io/projected/f765720e-d5e0-487a-b99d-ca3019597301-kube-api-access-kglnr\") pod \"certified-operators-n8xk5\" (UID: \"f765720e-d5e0-487a-b99d-ca3019597301\") " pod="openshift-marketplace/certified-operators-n8xk5"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.272182 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f765720e-d5e0-487a-b99d-ca3019597301-utilities\") pod \"certified-operators-n8xk5\" (UID: \"f765720e-d5e0-487a-b99d-ca3019597301\") " pod="openshift-marketplace/certified-operators-n8xk5"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.272400 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f765720e-d5e0-487a-b99d-ca3019597301-catalog-content\") pod \"certified-operators-n8xk5\" (UID: \"f765720e-d5e0-487a-b99d-ca3019597301\") " pod="openshift-marketplace/certified-operators-n8xk5"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.373628 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f765720e-d5e0-487a-b99d-ca3019597301-catalog-content\") pod \"certified-operators-n8xk5\" (UID: \"f765720e-d5e0-487a-b99d-ca3019597301\") " pod="openshift-marketplace/certified-operators-n8xk5"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.373715 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kglnr\" (UniqueName: \"kubernetes.io/projected/f765720e-d5e0-487a-b99d-ca3019597301-kube-api-access-kglnr\") pod \"certified-operators-n8xk5\" (UID: \"f765720e-d5e0-487a-b99d-ca3019597301\") " pod="openshift-marketplace/certified-operators-n8xk5"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.373737 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f765720e-d5e0-487a-b99d-ca3019597301-utilities\") pod \"certified-operators-n8xk5\" (UID: \"f765720e-d5e0-487a-b99d-ca3019597301\") " pod="openshift-marketplace/certified-operators-n8xk5"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.374375 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f765720e-d5e0-487a-b99d-ca3019597301-utilities\") pod \"certified-operators-n8xk5\" (UID: \"f765720e-d5e0-487a-b99d-ca3019597301\") " pod="openshift-marketplace/certified-operators-n8xk5"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.374592 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f765720e-d5e0-487a-b99d-ca3019597301-catalog-content\") pod \"certified-operators-n8xk5\" (UID: \"f765720e-d5e0-487a-b99d-ca3019597301\") " pod="openshift-marketplace/certified-operators-n8xk5"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.405130 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kglnr\" (UniqueName: \"kubernetes.io/projected/f765720e-d5e0-487a-b99d-ca3019597301-kube-api-access-kglnr\") pod \"certified-operators-n8xk5\" (UID: \"f765720e-d5e0-487a-b99d-ca3019597301\") " pod="openshift-marketplace/certified-operators-n8xk5"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.539397 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n8xk5"
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.889818 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n8xk5"]
Jan 04 00:43:58 crc kubenswrapper[5108]: I0104 00:43:58.899554 5108 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 04 00:43:59 crc kubenswrapper[5108]: I0104 00:43:59.249957 5108 generic.go:358] "Generic (PLEG): container finished" podID="f765720e-d5e0-487a-b99d-ca3019597301" containerID="6c8980bdfdba9d40d6a6bd08b90319d548613b0017829e288b68aea46eec7440" exitCode=0
Jan 04 00:43:59 crc kubenswrapper[5108]: I0104 00:43:59.250060 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8xk5" event={"ID":"f765720e-d5e0-487a-b99d-ca3019597301","Type":"ContainerDied","Data":"6c8980bdfdba9d40d6a6bd08b90319d548613b0017829e288b68aea46eec7440"}
Jan 04 00:43:59 crc kubenswrapper[5108]: I0104 00:43:59.250521 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8xk5" event={"ID":"f765720e-d5e0-487a-b99d-ca3019597301","Type":"ContainerStarted","Data":"4a5a68c67ff282a40a2bf29a011d1571595ebb1ba8937574f18e383bb794fbcb"}
Jan 04 00:44:00 crc kubenswrapper[5108]: I0104 00:44:00.159286 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29458124-bpnkc"]
Jan 04 00:44:00 crc kubenswrapper[5108]: I0104 00:44:00.165841 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458124-bpnkc"
Jan 04 00:44:00 crc kubenswrapper[5108]: I0104 00:44:00.166327 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458124-bpnkc"]
Jan 04 00:44:00 crc kubenswrapper[5108]: I0104 00:44:00.169118 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 04 00:44:00 crc kubenswrapper[5108]: I0104 00:44:00.169465 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-s7k94\""
Jan 04 00:44:00 crc kubenswrapper[5108]: I0104 00:44:00.169781 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 04 00:44:00 crc kubenswrapper[5108]: I0104 00:44:00.247007 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvddn\" (UniqueName: \"kubernetes.io/projected/dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3-kube-api-access-qvddn\") pod \"auto-csr-approver-29458124-bpnkc\" (UID: \"dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3\") " pod="openshift-infra/auto-csr-approver-29458124-bpnkc"
Jan 04 00:44:00 crc kubenswrapper[5108]: I0104 00:44:00.262566 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8xk5" event={"ID":"f765720e-d5e0-487a-b99d-ca3019597301","Type":"ContainerStarted","Data":"b952d9528ddcdad5880e8a82add713be13ea7ff2ceeeb505689b71a649ec89f2"}
Jan 04 00:44:00 crc kubenswrapper[5108]: I0104 00:44:00.349389 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qvddn\" (UniqueName: \"kubernetes.io/projected/dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3-kube-api-access-qvddn\") pod \"auto-csr-approver-29458124-bpnkc\" (UID: \"dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3\") " pod="openshift-infra/auto-csr-approver-29458124-bpnkc"
Jan 04 00:44:00 crc kubenswrapper[5108]: I0104 00:44:00.371396 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvddn\" (UniqueName: \"kubernetes.io/projected/dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3-kube-api-access-qvddn\") pod \"auto-csr-approver-29458124-bpnkc\" (UID: \"dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3\") " pod="openshift-infra/auto-csr-approver-29458124-bpnkc"
Jan 04 00:44:00 crc kubenswrapper[5108]: I0104 00:44:00.491456 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458124-bpnkc"
Jan 04 00:44:00 crc kubenswrapper[5108]: I0104 00:44:00.918013 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29458124-bpnkc"]
Jan 04 00:44:00 crc kubenswrapper[5108]: W0104 00:44:00.922221 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc1a2981_1b9a_473b_a99c_9dd1d5d1a1b3.slice/crio-90fe115bef25e72b1c63929b501a2fad9fc2d9b2f8a649e61b383c661507afe5 WatchSource:0}: Error finding container 90fe115bef25e72b1c63929b501a2fad9fc2d9b2f8a649e61b383c661507afe5: Status 404 returned error can't find the container with id 90fe115bef25e72b1c63929b501a2fad9fc2d9b2f8a649e61b383c661507afe5
Jan 04 00:44:01 crc kubenswrapper[5108]: I0104 00:44:01.276385 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458124-bpnkc" event={"ID":"dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3","Type":"ContainerStarted","Data":"90fe115bef25e72b1c63929b501a2fad9fc2d9b2f8a649e61b383c661507afe5"}
Jan 04 00:44:01 crc kubenswrapper[5108]: I0104 00:44:01.279864 5108 generic.go:358] "Generic (PLEG): container finished" podID="f765720e-d5e0-487a-b99d-ca3019597301" containerID="b952d9528ddcdad5880e8a82add713be13ea7ff2ceeeb505689b71a649ec89f2" exitCode=0
Jan 04 00:44:01 crc kubenswrapper[5108]: I0104 00:44:01.279919 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8xk5" event={"ID":"f765720e-d5e0-487a-b99d-ca3019597301","Type":"ContainerDied","Data":"b952d9528ddcdad5880e8a82add713be13ea7ff2ceeeb505689b71a649ec89f2"}
Jan 04 00:44:02 crc kubenswrapper[5108]: I0104 00:44:02.295533 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8xk5" event={"ID":"f765720e-d5e0-487a-b99d-ca3019597301","Type":"ContainerStarted","Data":"28fa73a237646490068429888df36c09eb474c7ef7f1c5d9d76e83c17c17c38d"}
Jan 04 00:44:02 crc kubenswrapper[5108]: I0104 00:44:02.327323 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n8xk5" podStartSLOduration=3.59932184 podStartE2EDuration="4.327298986s" podCreationTimestamp="2026-01-04 00:43:58 +0000 UTC" firstStartedPulling="2026-01-04 00:43:59.251045671 +0000 UTC m=+2013.239610757" lastFinishedPulling="2026-01-04 00:43:59.979022817 +0000 UTC m=+2013.967587903" observedRunningTime="2026-01-04 00:44:02.320106399 +0000 UTC m=+2016.308671515" watchObservedRunningTime="2026-01-04 00:44:02.327298986 +0000 UTC m=+2016.315864092"
Jan 04 00:44:03 crc kubenswrapper[5108]: I0104 00:44:03.305979 5108 generic.go:358] "Generic (PLEG): container finished" podID="dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3" containerID="3b4c2fcd05e3b39ed8fae63684723de6a35ca26502fc7eb440af8d446b8033f5" exitCode=0
Jan 04 00:44:03 crc kubenswrapper[5108]: I0104 00:44:03.306160 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458124-bpnkc" event={"ID":"dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3","Type":"ContainerDied","Data":"3b4c2fcd05e3b39ed8fae63684723de6a35ca26502fc7eb440af8d446b8033f5"}
Jan 04 00:44:04 crc kubenswrapper[5108]: I0104 00:44:04.618189 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458124-bpnkc"
Jan 04 00:44:04 crc kubenswrapper[5108]: I0104 00:44:04.637993 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvddn\" (UniqueName: \"kubernetes.io/projected/dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3-kube-api-access-qvddn\") pod \"dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3\" (UID: \"dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3\") "
Jan 04 00:44:04 crc kubenswrapper[5108]: I0104 00:44:04.646478 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3-kube-api-access-qvddn" (OuterVolumeSpecName: "kube-api-access-qvddn") pod "dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3" (UID: "dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3"). InnerVolumeSpecName "kube-api-access-qvddn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:44:04 crc kubenswrapper[5108]: I0104 00:44:04.740308 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qvddn\" (UniqueName: \"kubernetes.io/projected/dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3-kube-api-access-qvddn\") on node \"crc\" DevicePath \"\""
Jan 04 00:44:05 crc kubenswrapper[5108]: I0104 00:44:05.328291 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29458124-bpnkc" event={"ID":"dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3","Type":"ContainerDied","Data":"90fe115bef25e72b1c63929b501a2fad9fc2d9b2f8a649e61b383c661507afe5"}
Jan 04 00:44:05 crc kubenswrapper[5108]: I0104 00:44:05.328349 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90fe115bef25e72b1c63929b501a2fad9fc2d9b2f8a649e61b383c661507afe5"
Jan 04 00:44:05 crc kubenswrapper[5108]: I0104 00:44:05.328359 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29458124-bpnkc"
Jan 04 00:44:05 crc kubenswrapper[5108]: I0104 00:44:05.697894 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29458118-dfc8d"]
Jan 04 00:44:05 crc kubenswrapper[5108]: I0104 00:44:05.702888 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29458118-dfc8d"]
Jan 04 00:44:06 crc kubenswrapper[5108]: I0104 00:44:06.460517 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5c0d7f7-1057-4fd8-ac9c-af9739624339" path="/var/lib/kubelet/pods/c5c0d7f7-1057-4fd8-ac9c-af9739624339/volumes"
Jan 04 00:44:06 crc kubenswrapper[5108]: I0104 00:44:06.460720 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57"
Jan 04 00:44:06 crc kubenswrapper[5108]: E0104 00:44:06.461453 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6"
Jan 04 00:44:08 crc kubenswrapper[5108]: I0104 00:44:08.539932 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n8xk5"
Jan 04 00:44:08 crc kubenswrapper[5108]: I0104 00:44:08.540073 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-n8xk5"
Jan 04 00:44:08 crc kubenswrapper[5108]: I0104 00:44:08.598394 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n8xk5"
Jan 04 00:44:09 crc kubenswrapper[5108]: I0104 00:44:09.425106 5108
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n8xk5" Jan 04 00:44:09 crc kubenswrapper[5108]: I0104 00:44:09.481332 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n8xk5"] Jan 04 00:44:11 crc kubenswrapper[5108]: I0104 00:44:11.387545 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n8xk5" podUID="f765720e-d5e0-487a-b99d-ca3019597301" containerName="registry-server" containerID="cri-o://28fa73a237646490068429888df36c09eb474c7ef7f1c5d9d76e83c17c17c38d" gracePeriod=2 Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.347840 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n8xk5" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.397746 5108 generic.go:358] "Generic (PLEG): container finished" podID="f765720e-d5e0-487a-b99d-ca3019597301" containerID="28fa73a237646490068429888df36c09eb474c7ef7f1c5d9d76e83c17c17c38d" exitCode=0 Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.397843 5108 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n8xk5" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.397989 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8xk5" event={"ID":"f765720e-d5e0-487a-b99d-ca3019597301","Type":"ContainerDied","Data":"28fa73a237646490068429888df36c09eb474c7ef7f1c5d9d76e83c17c17c38d"} Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.398034 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8xk5" event={"ID":"f765720e-d5e0-487a-b99d-ca3019597301","Type":"ContainerDied","Data":"4a5a68c67ff282a40a2bf29a011d1571595ebb1ba8937574f18e383bb794fbcb"} Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.398060 5108 scope.go:117] "RemoveContainer" containerID="28fa73a237646490068429888df36c09eb474c7ef7f1c5d9d76e83c17c17c38d" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.422015 5108 scope.go:117] "RemoveContainer" containerID="b952d9528ddcdad5880e8a82add713be13ea7ff2ceeeb505689b71a649ec89f2" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.450972 5108 scope.go:117] "RemoveContainer" containerID="6c8980bdfdba9d40d6a6bd08b90319d548613b0017829e288b68aea46eec7440" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.478192 5108 scope.go:117] "RemoveContainer" containerID="28fa73a237646490068429888df36c09eb474c7ef7f1c5d9d76e83c17c17c38d" Jan 04 00:44:12 crc kubenswrapper[5108]: E0104 00:44:12.479119 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28fa73a237646490068429888df36c09eb474c7ef7f1c5d9d76e83c17c17c38d\": container with ID starting with 28fa73a237646490068429888df36c09eb474c7ef7f1c5d9d76e83c17c17c38d not found: ID does not exist" containerID="28fa73a237646490068429888df36c09eb474c7ef7f1c5d9d76e83c17c17c38d" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.479180 5108 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28fa73a237646490068429888df36c09eb474c7ef7f1c5d9d76e83c17c17c38d"} err="failed to get container status \"28fa73a237646490068429888df36c09eb474c7ef7f1c5d9d76e83c17c17c38d\": rpc error: code = NotFound desc = could not find container \"28fa73a237646490068429888df36c09eb474c7ef7f1c5d9d76e83c17c17c38d\": container with ID starting with 28fa73a237646490068429888df36c09eb474c7ef7f1c5d9d76e83c17c17c38d not found: ID does not exist" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.479233 5108 scope.go:117] "RemoveContainer" containerID="b952d9528ddcdad5880e8a82add713be13ea7ff2ceeeb505689b71a649ec89f2" Jan 04 00:44:12 crc kubenswrapper[5108]: E0104 00:44:12.479744 5108 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b952d9528ddcdad5880e8a82add713be13ea7ff2ceeeb505689b71a649ec89f2\": container with ID starting with b952d9528ddcdad5880e8a82add713be13ea7ff2ceeeb505689b71a649ec89f2 not found: ID does not exist" containerID="b952d9528ddcdad5880e8a82add713be13ea7ff2ceeeb505689b71a649ec89f2" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.479815 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b952d9528ddcdad5880e8a82add713be13ea7ff2ceeeb505689b71a649ec89f2"} err="failed to get container status \"b952d9528ddcdad5880e8a82add713be13ea7ff2ceeeb505689b71a649ec89f2\": rpc error: code = NotFound desc = could not find container \"b952d9528ddcdad5880e8a82add713be13ea7ff2ceeeb505689b71a649ec89f2\": container with ID starting with b952d9528ddcdad5880e8a82add713be13ea7ff2ceeeb505689b71a649ec89f2 not found: ID does not exist" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.479849 5108 scope.go:117] "RemoveContainer" containerID="6c8980bdfdba9d40d6a6bd08b90319d548613b0017829e288b68aea46eec7440" Jan 04 00:44:12 crc kubenswrapper[5108]: E0104 00:44:12.480416 5108 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c8980bdfdba9d40d6a6bd08b90319d548613b0017829e288b68aea46eec7440\": container with ID starting with 6c8980bdfdba9d40d6a6bd08b90319d548613b0017829e288b68aea46eec7440 not found: ID does not exist" containerID="6c8980bdfdba9d40d6a6bd08b90319d548613b0017829e288b68aea46eec7440" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.480455 5108 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8980bdfdba9d40d6a6bd08b90319d548613b0017829e288b68aea46eec7440"} err="failed to get container status \"6c8980bdfdba9d40d6a6bd08b90319d548613b0017829e288b68aea46eec7440\": rpc error: code = NotFound desc = could not find container \"6c8980bdfdba9d40d6a6bd08b90319d548613b0017829e288b68aea46eec7440\": container with ID starting with 6c8980bdfdba9d40d6a6bd08b90319d548613b0017829e288b68aea46eec7440 not found: ID does not exist" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.492445 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kglnr\" (UniqueName: \"kubernetes.io/projected/f765720e-d5e0-487a-b99d-ca3019597301-kube-api-access-kglnr\") pod \"f765720e-d5e0-487a-b99d-ca3019597301\" (UID: \"f765720e-d5e0-487a-b99d-ca3019597301\") " Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.492583 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f765720e-d5e0-487a-b99d-ca3019597301-catalog-content\") pod \"f765720e-d5e0-487a-b99d-ca3019597301\" (UID: \"f765720e-d5e0-487a-b99d-ca3019597301\") " Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.492799 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f765720e-d5e0-487a-b99d-ca3019597301-utilities\") pod \"f765720e-d5e0-487a-b99d-ca3019597301\" 
(UID: \"f765720e-d5e0-487a-b99d-ca3019597301\") " Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.494152 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f765720e-d5e0-487a-b99d-ca3019597301-utilities" (OuterVolumeSpecName: "utilities") pod "f765720e-d5e0-487a-b99d-ca3019597301" (UID: "f765720e-d5e0-487a-b99d-ca3019597301"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.494458 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f765720e-d5e0-487a-b99d-ca3019597301-utilities\") on node \"crc\" DevicePath \"\"" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.503611 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f765720e-d5e0-487a-b99d-ca3019597301-kube-api-access-kglnr" (OuterVolumeSpecName: "kube-api-access-kglnr") pod "f765720e-d5e0-487a-b99d-ca3019597301" (UID: "f765720e-d5e0-487a-b99d-ca3019597301"). InnerVolumeSpecName "kube-api-access-kglnr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.529701 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f765720e-d5e0-487a-b99d-ca3019597301-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f765720e-d5e0-487a-b99d-ca3019597301" (UID: "f765720e-d5e0-487a-b99d-ca3019597301"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.595724 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kglnr\" (UniqueName: \"kubernetes.io/projected/f765720e-d5e0-487a-b99d-ca3019597301-kube-api-access-kglnr\") on node \"crc\" DevicePath \"\"" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.595761 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f765720e-d5e0-487a-b99d-ca3019597301-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.742028 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n8xk5"] Jan 04 00:44:12 crc kubenswrapper[5108]: I0104 00:44:12.750142 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n8xk5"] Jan 04 00:44:14 crc kubenswrapper[5108]: I0104 00:44:14.466529 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f765720e-d5e0-487a-b99d-ca3019597301" path="/var/lib/kubelet/pods/f765720e-d5e0-487a-b99d-ca3019597301/volumes" Jan 04 00:44:20 crc kubenswrapper[5108]: I0104 00:44:20.450777 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57" Jan 04 00:44:20 crc kubenswrapper[5108]: E0104 00:44:20.453578 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" Jan 04 00:44:31 crc kubenswrapper[5108]: I0104 00:44:31.449558 5108 scope.go:117] "RemoveContainer" 
containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57" Jan 04 00:44:31 crc kubenswrapper[5108]: E0104 00:44:31.451524 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" Jan 04 00:44:33 crc kubenswrapper[5108]: I0104 00:44:33.603524 5108 scope.go:117] "RemoveContainer" containerID="981124768e1215576bab1ae7e2b3dc25840da8080e151bff2aa7ef69d38ac239" Jan 04 00:44:45 crc kubenswrapper[5108]: I0104 00:44:45.449106 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57" Jan 04 00:44:45 crc kubenswrapper[5108]: E0104 00:44:45.450665 5108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-njl5v_openshift-machine-config-operator(f377d71c-c91f-4a27-8276-7e06263de9f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" podUID="f377d71c-c91f-4a27-8276-7e06263de9f6" Jan 04 00:44:59 crc kubenswrapper[5108]: I0104 00:44:59.449776 5108 scope.go:117] "RemoveContainer" containerID="15c8656fd764eb20372a9f4856bcef683bbc77c220cdb81c7f3737071a288c57" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.000597 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-njl5v" event={"ID":"f377d71c-c91f-4a27-8276-7e06263de9f6","Type":"ContainerStarted","Data":"91fbd9cde547ad4a95f05ee488f18afd2c2c0cbb84fcb2efc253cfcbde14fac9"} Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 
00:45:00.142879 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f"] Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.143965 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f765720e-d5e0-487a-b99d-ca3019597301" containerName="extract-utilities" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.143989 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="f765720e-d5e0-487a-b99d-ca3019597301" containerName="extract-utilities" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.144002 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3" containerName="oc" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.144010 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3" containerName="oc" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.144045 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f765720e-d5e0-487a-b99d-ca3019597301" containerName="registry-server" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.144055 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="f765720e-d5e0-487a-b99d-ca3019597301" containerName="registry-server" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.144077 5108 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f765720e-d5e0-487a-b99d-ca3019597301" containerName="extract-content" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.144084 5108 state_mem.go:107] "Deleted CPUSet assignment" podUID="f765720e-d5e0-487a-b99d-ca3019597301" containerName="extract-content" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.144322 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="f765720e-d5e0-487a-b99d-ca3019597301" containerName="registry-server" Jan 04 00:45:00 crc 
kubenswrapper[5108]: I0104 00:45:00.144347 5108 memory_manager.go:356] "RemoveStaleState removing state" podUID="dc1a2981-1b9a-473b-a99c-9dd1d5d1a1b3" containerName="oc" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.151400 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.156911 5108 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.157323 5108 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.170943 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f"] Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.238109 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a4a4538-48b2-4d67-9404-47885b4c5de0-config-volume\") pod \"collect-profiles-29458125-vht7f\" (UID: \"5a4a4538-48b2-4d67-9404-47885b4c5de0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.238776 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldjdm\" (UniqueName: \"kubernetes.io/projected/5a4a4538-48b2-4d67-9404-47885b4c5de0-kube-api-access-ldjdm\") pod \"collect-profiles-29458125-vht7f\" (UID: \"5a4a4538-48b2-4d67-9404-47885b4c5de0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.238807 5108 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a4a4538-48b2-4d67-9404-47885b4c5de0-secret-volume\") pod \"collect-profiles-29458125-vht7f\" (UID: \"5a4a4538-48b2-4d67-9404-47885b4c5de0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.341069 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ldjdm\" (UniqueName: \"kubernetes.io/projected/5a4a4538-48b2-4d67-9404-47885b4c5de0-kube-api-access-ldjdm\") pod \"collect-profiles-29458125-vht7f\" (UID: \"5a4a4538-48b2-4d67-9404-47885b4c5de0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.341145 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a4a4538-48b2-4d67-9404-47885b4c5de0-secret-volume\") pod \"collect-profiles-29458125-vht7f\" (UID: \"5a4a4538-48b2-4d67-9404-47885b4c5de0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.341297 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a4a4538-48b2-4d67-9404-47885b4c5de0-config-volume\") pod \"collect-profiles-29458125-vht7f\" (UID: \"5a4a4538-48b2-4d67-9404-47885b4c5de0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.342602 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a4a4538-48b2-4d67-9404-47885b4c5de0-config-volume\") pod \"collect-profiles-29458125-vht7f\" (UID: \"5a4a4538-48b2-4d67-9404-47885b4c5de0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.349583 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a4a4538-48b2-4d67-9404-47885b4c5de0-secret-volume\") pod \"collect-profiles-29458125-vht7f\" (UID: \"5a4a4538-48b2-4d67-9404-47885b4c5de0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.369292 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldjdm\" (UniqueName: \"kubernetes.io/projected/5a4a4538-48b2-4d67-9404-47885b4c5de0-kube-api-access-ldjdm\") pod \"collect-profiles-29458125-vht7f\" (UID: \"5a4a4538-48b2-4d67-9404-47885b4c5de0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.437442 5108 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d56bd"] Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.444921 5108 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d56bd" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.468653 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d56bd"] Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.492995 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.544829 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-catalog-content\") pod \"redhat-operators-d56bd\" (UID: \"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1\") " pod="openshift-marketplace/redhat-operators-d56bd" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.545468 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-utilities\") pod \"redhat-operators-d56bd\" (UID: \"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1\") " pod="openshift-marketplace/redhat-operators-d56bd" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.545504 5108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr9tg\" (UniqueName: \"kubernetes.io/projected/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-kube-api-access-nr9tg\") pod \"redhat-operators-d56bd\" (UID: \"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1\") " pod="openshift-marketplace/redhat-operators-d56bd" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.649169 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-utilities\") pod \"redhat-operators-d56bd\" (UID: \"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1\") " pod="openshift-marketplace/redhat-operators-d56bd" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.649243 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nr9tg\" (UniqueName: \"kubernetes.io/projected/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-kube-api-access-nr9tg\") pod 
\"redhat-operators-d56bd\" (UID: \"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1\") " pod="openshift-marketplace/redhat-operators-d56bd" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.649339 5108 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-catalog-content\") pod \"redhat-operators-d56bd\" (UID: \"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1\") " pod="openshift-marketplace/redhat-operators-d56bd" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.650013 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-catalog-content\") pod \"redhat-operators-d56bd\" (UID: \"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1\") " pod="openshift-marketplace/redhat-operators-d56bd" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.650320 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-utilities\") pod \"redhat-operators-d56bd\" (UID: \"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1\") " pod="openshift-marketplace/redhat-operators-d56bd" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.686523 5108 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr9tg\" (UniqueName: \"kubernetes.io/projected/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-kube-api-access-nr9tg\") pod \"redhat-operators-d56bd\" (UID: \"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1\") " pod="openshift-marketplace/redhat-operators-d56bd" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.767797 5108 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d56bd" Jan 04 00:45:00 crc kubenswrapper[5108]: I0104 00:45:00.957105 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f"] Jan 04 00:45:00 crc kubenswrapper[5108]: W0104 00:45:00.968014 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a4a4538_48b2_4d67_9404_47885b4c5de0.slice/crio-71218a36310306d0f1dfe08dfdb52a0bf3d4a64a7c1d0c751b110bf3c6150379 WatchSource:0}: Error finding container 71218a36310306d0f1dfe08dfdb52a0bf3d4a64a7c1d0c751b110bf3c6150379: Status 404 returned error can't find the container with id 71218a36310306d0f1dfe08dfdb52a0bf3d4a64a7c1d0c751b110bf3c6150379 Jan 04 00:45:01 crc kubenswrapper[5108]: I0104 00:45:01.033908 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f" event={"ID":"5a4a4538-48b2-4d67-9404-47885b4c5de0","Type":"ContainerStarted","Data":"71218a36310306d0f1dfe08dfdb52a0bf3d4a64a7c1d0c751b110bf3c6150379"} Jan 04 00:45:01 crc kubenswrapper[5108]: I0104 00:45:01.048124 5108 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d56bd"] Jan 04 00:45:01 crc kubenswrapper[5108]: W0104 00:45:01.052574 5108 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a706c0f_2aa8_4c2d_a398_e2cf1d56b9e1.slice/crio-2990f652a86dd096865bd89fe80cc9cee6f24f5fc6caaf2ae6f2b83ad498d6ca WatchSource:0}: Error finding container 2990f652a86dd096865bd89fe80cc9cee6f24f5fc6caaf2ae6f2b83ad498d6ca: Status 404 returned error can't find the container with id 2990f652a86dd096865bd89fe80cc9cee6f24f5fc6caaf2ae6f2b83ad498d6ca Jan 04 00:45:02 crc kubenswrapper[5108]: I0104 00:45:02.044677 5108 generic.go:358] "Generic (PLEG): container finished" 
podID="5a4a4538-48b2-4d67-9404-47885b4c5de0" containerID="e90032cfc0c0efbeefe84d67588573787e3e41d8d8792bbe7d9401b411c49166" exitCode=0
Jan 04 00:45:02 crc kubenswrapper[5108]: I0104 00:45:02.044766 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f" event={"ID":"5a4a4538-48b2-4d67-9404-47885b4c5de0","Type":"ContainerDied","Data":"e90032cfc0c0efbeefe84d67588573787e3e41d8d8792bbe7d9401b411c49166"}
Jan 04 00:45:02 crc kubenswrapper[5108]: I0104 00:45:02.047453 5108 generic.go:358] "Generic (PLEG): container finished" podID="5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1" containerID="c6d06098045657c1264b0a9c0204e72a8b0d468fa0a0041a73a665b87ea85938" exitCode=0
Jan 04 00:45:02 crc kubenswrapper[5108]: I0104 00:45:02.047538 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d56bd" event={"ID":"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1","Type":"ContainerDied","Data":"c6d06098045657c1264b0a9c0204e72a8b0d468fa0a0041a73a665b87ea85938"}
Jan 04 00:45:02 crc kubenswrapper[5108]: I0104 00:45:02.047572 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d56bd" event={"ID":"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1","Type":"ContainerStarted","Data":"2990f652a86dd096865bd89fe80cc9cee6f24f5fc6caaf2ae6f2b83ad498d6ca"}
Jan 04 00:45:03 crc kubenswrapper[5108]: I0104 00:45:03.058929 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d56bd" event={"ID":"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1","Type":"ContainerStarted","Data":"588221ee12052993fe37fdfcf98b62f987ef9cc43d96f27d1880e6cf15eb4398"}
Jan 04 00:45:03 crc kubenswrapper[5108]: I0104 00:45:03.347481 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f"
Jan 04 00:45:03 crc kubenswrapper[5108]: I0104 00:45:03.400387 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a4a4538-48b2-4d67-9404-47885b4c5de0-config-volume\") pod \"5a4a4538-48b2-4d67-9404-47885b4c5de0\" (UID: \"5a4a4538-48b2-4d67-9404-47885b4c5de0\") "
Jan 04 00:45:03 crc kubenswrapper[5108]: I0104 00:45:03.400589 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a4a4538-48b2-4d67-9404-47885b4c5de0-secret-volume\") pod \"5a4a4538-48b2-4d67-9404-47885b4c5de0\" (UID: \"5a4a4538-48b2-4d67-9404-47885b4c5de0\") "
Jan 04 00:45:03 crc kubenswrapper[5108]: I0104 00:45:03.400734 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldjdm\" (UniqueName: \"kubernetes.io/projected/5a4a4538-48b2-4d67-9404-47885b4c5de0-kube-api-access-ldjdm\") pod \"5a4a4538-48b2-4d67-9404-47885b4c5de0\" (UID: \"5a4a4538-48b2-4d67-9404-47885b4c5de0\") "
Jan 04 00:45:03 crc kubenswrapper[5108]: I0104 00:45:03.401683 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a4a4538-48b2-4d67-9404-47885b4c5de0-config-volume" (OuterVolumeSpecName: "config-volume") pod "5a4a4538-48b2-4d67-9404-47885b4c5de0" (UID: "5a4a4538-48b2-4d67-9404-47885b4c5de0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 04 00:45:03 crc kubenswrapper[5108]: I0104 00:45:03.412177 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4a4538-48b2-4d67-9404-47885b4c5de0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5a4a4538-48b2-4d67-9404-47885b4c5de0" (UID: "5a4a4538-48b2-4d67-9404-47885b4c5de0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 04 00:45:03 crc kubenswrapper[5108]: I0104 00:45:03.418473 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4a4538-48b2-4d67-9404-47885b4c5de0-kube-api-access-ldjdm" (OuterVolumeSpecName: "kube-api-access-ldjdm") pod "5a4a4538-48b2-4d67-9404-47885b4c5de0" (UID: "5a4a4538-48b2-4d67-9404-47885b4c5de0"). InnerVolumeSpecName "kube-api-access-ldjdm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:45:03 crc kubenswrapper[5108]: I0104 00:45:03.502550 5108 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a4a4538-48b2-4d67-9404-47885b4c5de0-config-volume\") on node \"crc\" DevicePath \"\""
Jan 04 00:45:03 crc kubenswrapper[5108]: I0104 00:45:03.502587 5108 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a4a4538-48b2-4d67-9404-47885b4c5de0-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 04 00:45:03 crc kubenswrapper[5108]: I0104 00:45:03.502601 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ldjdm\" (UniqueName: \"kubernetes.io/projected/5a4a4538-48b2-4d67-9404-47885b4c5de0-kube-api-access-ldjdm\") on node \"crc\" DevicePath \"\""
Jan 04 00:45:04 crc kubenswrapper[5108]: I0104 00:45:04.069482 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f"
Jan 04 00:45:04 crc kubenswrapper[5108]: I0104 00:45:04.069500 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29458125-vht7f" event={"ID":"5a4a4538-48b2-4d67-9404-47885b4c5de0","Type":"ContainerDied","Data":"71218a36310306d0f1dfe08dfdb52a0bf3d4a64a7c1d0c751b110bf3c6150379"}
Jan 04 00:45:04 crc kubenswrapper[5108]: I0104 00:45:04.070345 5108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71218a36310306d0f1dfe08dfdb52a0bf3d4a64a7c1d0c751b110bf3c6150379"
Jan 04 00:45:04 crc kubenswrapper[5108]: I0104 00:45:04.079843 5108 generic.go:358] "Generic (PLEG): container finished" podID="5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1" containerID="588221ee12052993fe37fdfcf98b62f987ef9cc43d96f27d1880e6cf15eb4398" exitCode=0
Jan 04 00:45:04 crc kubenswrapper[5108]: I0104 00:45:04.079935 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d56bd" event={"ID":"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1","Type":"ContainerDied","Data":"588221ee12052993fe37fdfcf98b62f987ef9cc43d96f27d1880e6cf15eb4398"}
Jan 04 00:45:04 crc kubenswrapper[5108]: I0104 00:45:04.423308 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k"]
Jan 04 00:45:04 crc kubenswrapper[5108]: I0104 00:45:04.432351 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29458080-xfr7k"]
Jan 04 00:45:04 crc kubenswrapper[5108]: I0104 00:45:04.459782 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a0c6ba9-a7b4-42c9-8121-790c1d9cb024" path="/var/lib/kubelet/pods/2a0c6ba9-a7b4-42c9-8121-790c1d9cb024/volumes"
Jan 04 00:45:05 crc kubenswrapper[5108]: I0104 00:45:05.093824 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d56bd" event={"ID":"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1","Type":"ContainerStarted","Data":"95c15b5539542e0ab4b5d603021abf20c7113604c68ece81cbf83e770012286e"}
Jan 04 00:45:05 crc kubenswrapper[5108]: I0104 00:45:05.118590 5108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d56bd" podStartSLOduration=4.368399408 podStartE2EDuration="5.118562954s" podCreationTimestamp="2026-01-04 00:45:00 +0000 UTC" firstStartedPulling="2026-01-04 00:45:02.049063885 +0000 UTC m=+2076.037628971" lastFinishedPulling="2026-01-04 00:45:02.799227431 +0000 UTC m=+2076.787792517" observedRunningTime="2026-01-04 00:45:05.114723818 +0000 UTC m=+2079.103288934" watchObservedRunningTime="2026-01-04 00:45:05.118562954 +0000 UTC m=+2079.107128040"
Jan 04 00:45:10 crc kubenswrapper[5108]: I0104 00:45:10.767955 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-d56bd"
Jan 04 00:45:10 crc kubenswrapper[5108]: I0104 00:45:10.768676 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d56bd"
Jan 04 00:45:10 crc kubenswrapper[5108]: I0104 00:45:10.813647 5108 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d56bd"
Jan 04 00:45:11 crc kubenswrapper[5108]: I0104 00:45:11.212498 5108 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d56bd"
Jan 04 00:45:11 crc kubenswrapper[5108]: I0104 00:45:11.262458 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d56bd"]
Jan 04 00:45:13 crc kubenswrapper[5108]: I0104 00:45:13.168979 5108 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d56bd" podUID="5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1" containerName="registry-server" containerID="cri-o://95c15b5539542e0ab4b5d603021abf20c7113604c68ece81cbf83e770012286e" gracePeriod=2
Jan 04 00:45:15 crc kubenswrapper[5108]: I0104 00:45:15.193437 5108 generic.go:358] "Generic (PLEG): container finished" podID="5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1" containerID="95c15b5539542e0ab4b5d603021abf20c7113604c68ece81cbf83e770012286e" exitCode=0
Jan 04 00:45:15 crc kubenswrapper[5108]: I0104 00:45:15.194053 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d56bd" event={"ID":"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1","Type":"ContainerDied","Data":"95c15b5539542e0ab4b5d603021abf20c7113604c68ece81cbf83e770012286e"}
Jan 04 00:45:16 crc kubenswrapper[5108]: I0104 00:45:16.866717 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d56bd"
Jan 04 00:45:16 crc kubenswrapper[5108]: I0104 00:45:16.941463 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-catalog-content\") pod \"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1\" (UID: \"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1\") "
Jan 04 00:45:16 crc kubenswrapper[5108]: I0104 00:45:16.941631 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr9tg\" (UniqueName: \"kubernetes.io/projected/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-kube-api-access-nr9tg\") pod \"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1\" (UID: \"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1\") "
Jan 04 00:45:16 crc kubenswrapper[5108]: I0104 00:45:16.941795 5108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-utilities\") pod \"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1\" (UID: \"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1\") "
Jan 04 00:45:16 crc kubenswrapper[5108]: I0104 00:45:16.943055 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-utilities" (OuterVolumeSpecName: "utilities") pod "5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1" (UID: "5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:45:16 crc kubenswrapper[5108]: I0104 00:45:16.950625 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-kube-api-access-nr9tg" (OuterVolumeSpecName: "kube-api-access-nr9tg") pod "5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1" (UID: "5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1"). InnerVolumeSpecName "kube-api-access-nr9tg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 04 00:45:17 crc kubenswrapper[5108]: I0104 00:45:17.043826 5108 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1" (UID: "5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 04 00:45:17 crc kubenswrapper[5108]: I0104 00:45:17.044155 5108 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 04 00:45:17 crc kubenswrapper[5108]: I0104 00:45:17.044212 5108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nr9tg\" (UniqueName: \"kubernetes.io/projected/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-kube-api-access-nr9tg\") on node \"crc\" DevicePath \"\""
Jan 04 00:45:17 crc kubenswrapper[5108]: I0104 00:45:17.044229 5108 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1-utilities\") on node \"crc\" DevicePath \"\""
Jan 04 00:45:17 crc kubenswrapper[5108]: I0104 00:45:17.218910 5108 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d56bd" event={"ID":"5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1","Type":"ContainerDied","Data":"2990f652a86dd096865bd89fe80cc9cee6f24f5fc6caaf2ae6f2b83ad498d6ca"}
Jan 04 00:45:17 crc kubenswrapper[5108]: I0104 00:45:17.219499 5108 scope.go:117] "RemoveContainer" containerID="95c15b5539542e0ab4b5d603021abf20c7113604c68ece81cbf83e770012286e"
Jan 04 00:45:17 crc kubenswrapper[5108]: I0104 00:45:17.218950 5108 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d56bd"
Jan 04 00:45:17 crc kubenswrapper[5108]: I0104 00:45:17.255546 5108 scope.go:117] "RemoveContainer" containerID="588221ee12052993fe37fdfcf98b62f987ef9cc43d96f27d1880e6cf15eb4398"
Jan 04 00:45:17 crc kubenswrapper[5108]: I0104 00:45:17.262260 5108 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d56bd"]
Jan 04 00:45:17 crc kubenswrapper[5108]: I0104 00:45:17.268445 5108 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d56bd"]
Jan 04 00:45:17 crc kubenswrapper[5108]: I0104 00:45:17.279075 5108 scope.go:117] "RemoveContainer" containerID="c6d06098045657c1264b0a9c0204e72a8b0d468fa0a0041a73a665b87ea85938"
Jan 04 00:45:18 crc kubenswrapper[5108]: I0104 00:45:18.470361 5108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1" path="/var/lib/kubelet/pods/5a706c0f-2aa8-4c2d-a398-e2cf1d56b9e1/volumes"